**Modelling and Simulation of Complex Adaptive System: The Diffusion of Socio-Environmental Innovation in the RENDRUS Network**

Aida Huerta Barrientos and

Yazmin Dillarza Andrade

about rendering of colours with three rendering engines and based on changes in the brightness of the object background. In the review part of the third chapter, advanced methods that enable visualization at micron resolution, methods used in the 3D visualization workflow and methods used for research purposes are presented. The fourth chapter deals with computer simulation of different textile forms. In the fifth chapter, modelling and computer simulation of microbial growth and metabolism kinetics, bioreactor dynamics and bioreactor feedback control are carried out to show the application methods and the usefulness of modelling and computer simulation methods in the optimization of bioprocess technology. Chapter 6 presents experimental problems related to research aiming at obtaining the data necessary to formulate a physical model of deformation of steel containing a zone consisting of a mixture of the solid and the liquid phases. Chapter 7 presents an idea of constructing a scientific workshop focused on high-temperature processes, based upon a concept of integrated modelling combining the advantages of computer and physical simulations. Surrogate models provide an appealing data-driven strategy for applications including design space exploration, optimization, visualization and sensitivity analysis; the eighth chapter is dedicated to these models. High-frequency and microwave electromagnetic fields are used in billions of devices and systems. The design of these systems is impossible without a detailed analysis of their electromagnetic field. Most microwave systems are very complex, so an analytical solution of the field equations for them is impossible. Computer simulation of high-frequency electromagnetic fields is shown in the ninth chapter. Chapter 10 proposes modelling of the task allocation problem by the use of Coloured Petri Nets. The proposed methodology allows the construction of compact models for task-scheduling problems. The PROMETHEE method, a mathematical model for multi-criteria decision-making, is one of the ideal methods when it is necessary to rank scenarios according to specific criteria, depending on whom the ranking is applied to. Chapter 11 presents various scenarios whose ranking is done according to defined criteria and weight coefficients for each of the stakeholders.

I would like to express my sincere gratitude to all the authors and coauthors for their contribution. The successful completion of the book *Computer Simulation* has been the result of the cooperation of many people. I would especially like to thank the Publishing Process Manager Ms. Nina Kalinić for her support during the publishing process.

> **Dragan Cvetković** Singidunum University

Faculty of Informatics and Computing

Belgrade, Republic of Serbia

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67740

#### **Abstract**

Socio-environmental innovation is a process of social change that implies both the participation of agents in social and environmental initiatives and the generation and diffusion of relevant information, which leads to social transformations for collective benefit. During the diffusion of socio-environmental innovations through a communication network, information is created and shared among participants until mutual understanding is reached. In the case of the National Network for Sustainable Rural Development (RENDRUS), it is very difficult for people in rural communities to get innovations adopted due to the lack of effective communication channels. This study aims to develop a novel agent-based simulation model of socio-environmental innovation diffusion in the RENDRUS network based on the complex adaptive systems approach. First, the conceptual model of socio-environmental innovation diffusion in the RENDRUS network based on the complexity approach is developed. Then, an agent-based simulation model is implemented using NetLogo software, followed by the simulation model analysis and the design of plausible simulation scenarios. The simulation results illustrate how the S-curve emerges from the interrelationships between agents, considering endogenous and social cohesion effects. The conclusions argue that greater social cohesion and popularity of socio-environmental innovations among small rural producers and their organizations, governmental institutions, academic institutions and the knowledge society correspond to less time to adopt socio-environmental innovations.

**Keywords:** complex adaptive systems, modelling and simulation, socio-environmental innovation, diffusion

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

As Sagarpa [1] explains, the Mexican National Network for Sustainable Rural Development (RENDRUS) network is a knowledge network for collaborative learning from producer to producer that contributes to diminishing territorial inequality, since it allows producers to share their own experiences. As Sagarpa [1] points out, the RENDRUS network contributes to food security through the development of capacities in rural communities, as well as the incorporation of new technologies that allow producers to establish areas for improvement in their productive, organizational and business processes. In this direction, the incorporation of innovations, the adoption of applied technology and the incorporation of universities into extension processes in rural areas led to the establishment of links between producers, their organizations and the knowledge society to generate sustainable rural development in Mexico.


As Yilmaz [5] points out, from the complexity perspective, innovation is conceptualized as a CAS phenomenon that occurs at different scales, including the individual, group and organizational levels. For instance, at the collective level, innovation can be conceptualized as a global property that emerges from the local interactions of actors within a CAS, which influence one another in response to the influence they receive. In this direction, Yilmaz [5] suggests that in a study of CAS it is necessary to know the structural and behavioural conditions for the emergence and sustainment of innovation. As reported by Yilmaz [5], the benefits of abstracting a conceptual model of collective innovation based on the CAS perspective are as follows:

• The empirical understanding of innovation is improved through a formal agent-based simulation model testbed by making it possible to explore particular types of observed regularities via simulation scenarios.

• Agent-based simulation models can be used as computational laboratories for the discovery of organizational designs that are conducive to innovation.

• Agent-based simulation acts as a theory-generation enabler by facilitating a better understanding of complex innovation dynamics over a full range of feasible configurations and behaviour of complex social systems.

In the study of the diffusion of innovations based on the CAS perspective, Rogers et al. [6] explore the actual and potential hybridization of these two system theories, relying on illustrations from historical to practical applications of the diffusion of innovation model credited to Rogers [3], particularly the STOP AIDS communication campaign in San Francisco, USA. Modelling the diffusion of environmental innovations not only helps in understanding diffusion processes but also enables researchers to develop scenarios of future use of these innovations, thereby indicating possible ways towards a more sustainable future, as Schwarz [7] stated. In the case study of the RENDRUS network, there is much interest in the diffusion of socio-environmental innovations in territories because it is one way towards a more sustainable use of natural resources. This chapter presents a novel agent-based simulation model of the diffusion of socio-environmental innovation in the RENDRUS network based on the CAS perspective in order to understand the diffusion process in the network for achieving sustainable rural development. The study is of relevance because getting an innovation and best practices in the value chain adopted is often very difficult for people in territories. Additionally, we consider that the CAS approach gives a better basis for understanding the diffusion of socio-environmental innovation as an evolutionary network of functional elements that interact, exhibiting the characteristics of a complex adaptive system. An important aspect of the resulting simulation model is that it provides an analytical tool to support the decision-making of governmental institutions towards sustainable rural development in Mexico.

On the one hand, according to Quiroga and Barrera Gaytan [2], socio-environmental innovation is a process of gradual change through action research in localized territories, which implies that a set of actors, based on their own interests, mission and capacity, participate in specific activities (scientific, technological, environmental, cultural, organizational, financial and commercial) whose orientation is not only to give a creative answer to linked problems of rural development and conservation of natural resources but also to generate learning that leads to the autonomy of the actors and to structural transformations that are reflected in the collective benefit. Following Ref. [2], socio-environmental innovation seeks to generate a flow of relevant information through channels and networks of interaction, to promote the process of generation and diffusion of innovations and to emphasize, as a central aspect, the interconnection of these channels and networks. On the other hand, as Rogers [3] states, the diffusion of innovation is a communication process through certain channels over time among the members of a social system, where participants create and share information with one another in order to reach a mutual understanding. As described by Rogers [3], from the late 1920s to the early 1980s, the nine major diffusion traditions were anthropology, early sociology, rural sociology, education, medical sociology, communication, marketing, geography and general sociology; the problem faced was that research designs consisted mainly of correlational analyses of data gathered in one-shot surveys of respondents (adopters and/or potential adopters of an innovation). More recently, the complex adaptive system (CAS) approach has been used to analyse the spread of an innovation through a complex social system.
The concept of complex adaptive system was introduced in 1967 by Walter Buckley to define a system in which large networks of essential components, without central control and with simple rules of interaction between the network components, give rise to complex collective behaviour, sophisticated information processing and adaptation via learning and evolution. Derived from the interactions between a CAS and the environment, the internal state of the CAS changes through time, forming trajectories that may or may not converge to certain regions of its state space, such as attractors and repulsors, depending on the function they perform. In the study of CAS, it is interesting to know its emergent properties, those that arise at higher structural levels and are due to the interactions between the elements at a lower structural level. The approach of modelling and simulation has been used to better understand the dynamics and evolution of CAS. As suggested by Viale and Pozzali [4], complex adaptive system research can teach us a series of useful lessons, especially those features necessitating the consideration of innovation systems as a complex adaptive system, because studying the dynamics of innovation is a complex task.
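As a minimal, purely illustrative sketch of trajectories converging to an attractor (a textbook toy system, not part of the RENDRUS model), consider the logistic map x(t+1) = r·x(t)·(1 − x(t)). For r = 2.5 every trajectory in (0, 1) converges to the same fixed-point attractor x* = 1 − 1/r = 0.6, regardless of the initial state:

```python
# Toy example (our own, not from the chapter): two trajectories of the
# logistic map started from different initial states end up in the same
# region of state space, the fixed-point attractor x* = 0.6 for r = 2.5.

def logistic_trajectory(x0, r=2.5, steps=100):
    """Iterate the logistic map x -> r * x * (1 - x) and return all visited states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Different initial conditions, same attractor.
final_a = logistic_trajectory(0.1)[-1]
final_b = logistic_trajectory(0.9)[-1]
print(round(final_a, 6), round(final_b, 6))  # both ≈ 0.6
```

For larger r the same map has periodic or chaotic attractors instead of a fixed point, which is the sense in which trajectories "may or may not converge" to a single region of the state space.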


The chapter is divided into five main sections. In Section 2, the conceptual model for the diffusion of socio-environmental innovation in the RENDRUS network based on the CAS approach is developed. In Section 3, the agent-based simulation model for the diffusion of socio-environmental innovation in the RENDRUS network is implemented using NetLogo software. The agent-based simulation model analysis and the design of plausible simulation scenarios for achieving sustainable rural development in Mexico are presented in Section 4. The concluding remarks are drawn in Section 5.

## **2. A conceptual model for diffusion of socio-environmental innovation in the RENDRUS network based on CAS perspective**

#### **2.1. Conceptual model development**

According to Sayama [8], the various modelling approaches can be put into the following major families:

• *Descriptive modelling*. In this approach, modellers try to specify the state of a system at the macro-level at a given time point, capturing what the system looks like. This can be done by taking a picture, creating prototypes such as physical models or using quantitative methods such as statistical models.

• *Rule-based modelling*. In this approach, modellers try to come up with dynamical rules at the micro-level that explain the observed macro-behaviour of a system. The modelling methodologies mainly used are cellular automata, network models, agent-based models and dynamical equations.

#### **2.2. Agent-based modelling and simulation (ABMS)**

The CAS perspective concerns elements called agents that learn or adapt in response to rule-based non-linear interactions with other agents, generating the behaviour and the hierarchical structure of a CAS; particular combinations of agents at one level become agents at the next level up [9]. As a result of the non-linear interactions, the agents are able to cooperate and evolve, improving in certain cases their fitness over time [10]. In the mid-1990s, the ABMS computational approach was recognized by Holland as a foundational methodology for the study of CAS [11]. ABMS is a form of computational modelling whereby a phenomenon is modelled in terms of agents and their interactions [12] using the bottom-up perspective, in the sense that the system behaviour that we observe in the model emerges from the bottom of the system through the direct interrelations of the agents at the basis of the model [13]. So, it is possible to understand the simple interaction rules between agents at the local level. The definition of a space scale is fundamental in ABMS, while the time scale varies in discrete steps. From the computational perspective, the software that is used to program agents has its origins in the areas of Artificial Intelligence, especially in the subfield of Distributed Artificial Intelligence [14, 15], whose objective is the study of agents' properties and the design of networks of interaction between them. It was suggested by Wooldridge and Jennings [16] that computational agents are typically characterized as follows:

• *Autonomy*: the agents have direct control of their actions and their internal state.

• *Social skills*: the agents interact with other agents through a computational language.

• *Reaction*: agents are able to perceive their environment and respond to it. The environment could be the physical world, a virtual world or a simulated world that includes other agents.

• *Proactivity*: because the agents react to their environment, they themselves have to take the goal-oriented initiative.

In general, the environment of agents is interpreted in terms of a metaphorical vocabulary of beliefs, desires, motives and emotions, which are generally applied more in the description of people. The agent's attributes typically modelled are knowledge and beliefs, inferences, social models, representation of knowledge, goals, planning, language and emotions. According to Macal and North [17], an agent-based model contains the following four elements (see **Figure 1**):

• Agents, their attributes and environment.

• Relationships between agents and the rules of interaction.

• A connectivity network that defines how and with whom agents interact.

• The agents' environment that interacts with agents, interchanging information.

**Figure 1.** A network of elements in a typical agent-based model.
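The four elements Macal and North list can be sketched in a few lines of code. The sketch below is a hypothetical Python illustration written for this chapter (the chapter's own model is implemented in NetLogo, and the class names, ring network and adoption rule here are our own choices, not the authors'): agents with an `adopted` attribute interact over a connectivity network under a simple rule, with the environment supplying information.

```python
class Agent:
    """An agent with attributes (an id and an adoption state) and a decision rule."""
    def __init__(self, agent_id):
        self.agent_id = agent_id   # agent attribute
        self.adopted = False       # agent attribute: has it adopted the innovation?

    def wants_to_adopt(self, neighbours, environment):
        # Rule of interaction: adopt when some connected neighbour has
        # adopted and the environment currently carries the information.
        return environment["information_available"] and any(n.adopted for n in neighbours)

agents = [Agent(i) for i in range(20)]
# Connectivity network: each agent listens to the previous one (a ring).
network = {a: [agents[i - 1]] for i, a in enumerate(agents)}
environment = {"information_available": True}   # environment exchanging information

agents[0].adopted = True                        # seed a single adopter
for _ in range(10):                             # ten interaction rounds
    # Synchronous update: collect all decisions first, then change state.
    deciders = [a for a in agents if not a.adopted
                and a.wants_to_adopt(network[a], environment)]
    for a in deciders:
        a.adopted = True

print(sum(a.adopted for a in agents))           # → 11 adopters after 10 rounds
```

With this ring network the innovation advances one agent per round, so the macro-level pattern (here, a linear adoption front) emerges purely from the micro-level rule, which is the bottom-up perspective the text describes.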

## **2.3. The RENDRUS network modelled as a CAS**

The RENDRUS network conceptualized as a CAS presents the following characteristics (see **Figure 2**):

• *The impact of the social structure*. The interactions between members of the RENDRUS network are not random but are rather limited by social networks, both internally and externally. The social structure and socio-cultural interactions between members have a crucial effect on the process of evolution of the adoption of socio-environmental innovations. In this case,

Modelling and Simulation of Complex Adaptive System: The Diffusion of Socio-Environmental Innovation...

http://dx.doi.org/10.5772/67740

7

**2.4. The diffusion of socio-environmental-innovation model in the RENDRUS network** 

The diffusion in the RENDRUS network is the process by which socio-environmental innovation is communicated through certain channels (meetings, Internet, etc.) over time among rural producers and their organizations, governmental institutions and academic institutions. In this case, when new answers to complex problems regarding sustainable rural development emerge from the interactions between rural producers and governmental and academic institutions and these are diffused and adopted or rejected by rural producers, some alterations occur at functional and structural level of the RENDRUS network, originating a social change due to the activation of peer networks about such socio-environmental innovation in the next four areas: transformation of primary, agricultural and livestock production, rural industry and marketing, rural non-agricultural services and handicrafts, and rural extension. Following Ref. [3], we present the five attributes of socio-environmental innovations of the

• *Relative advantage*. The degree to which rural producers perceive the socio-environmental innovation as being better than the practice it supersedes. The exchange of innovationevaluation information between rural producers of the RENDRUS network, using the com-

• *Compatibility*. The degree to which the socio-environmental innovation is perceived by rural producers as consistent with the existing socio-cultural values, past experiences and

• *Complexity*. The degree to which rural producers perceive the socio-environmental innova-

• Trialability. The degree to which the socio-environmental innovation may be experimented

• *Observability*. The degree to which the results of the socio-environmental innovation are

In summary, socio-environmental innovations of the RENDRUS network that are perceived as relatively advantageous, compatible with existing socio-cultural values, beliefs and experiences, relatively easy to adapt, observable and divisible for trial, will be adopted more rapidly

As discussed by Bass [18], in the literature, two models for how innovations diffuse through social systems are recognized: endogenous and exogenous. On the one hand, in the endogenous

the principle of the social structure is based on the community.

**based on CAS approach**

RENDRUS network as follows:

munication network, lies the diffusion process.

tion as relatively difficult to understand and use.

specific needs of potential adopters.

with on a limited basis.

by rural producers.

visible to other rural producers.


**Figure 2.** The conceptual model of the RENDRUS network based on CAS approach.

• *The impact of the social structure*. The interactions between members of the RENDRUS network are not random but are rather limited by social networks, both internally and externally. The social structure and socio-cultural interactions between members have a crucial effect on the process of evolution of the adoption of socio-environmental innovations. In this case, the principle of the social structure is based on the community.

**2.3. The RENDRUS network modelled as a CAS**

(see **Figure 2**):

6 Computer Simulation

benefit.

The RENDRUS network conceptualized as a CAS presents the following characteristics

• *Multiple key heterogeneous agents*. Small rural producers and their organizations, govern-

• *Different structural levels*. At micro-level, the RENDRUS network is constituted by rural producers interacting with one another, and with governmental and academic institutions, while at macro-level, the collaborative learning emerges for leading to the autonomy of rural producers and the structural transformations, which are reflected in the collective

• *Intrinsic diversity among its key agents*. Small producers from the 32 Mexican federal states are product of the exposure of their own experiences within a socio-economic, cultural and environmental context. So, the incorporation and adoption of new socio-environmental innovations, in order to establish areas for improvement in productive, organizational and

• *Functional dynamics*. The RENDRUS network is an open system that exchanges relevant information containing producer´s experiences, with the complex environment. In order to survive (increasing the participation of rural producers and governmental and academic institutions), the RENDRUS network has to adapt itself to new conditions imposed by the environment, adjusting its functional units through the modification and selection of new

mental institutions, academic institutions and the knowledge society.

business processes, are determined by local requirements.

**Figure 2.** The conceptual model of the RENDRUS network based on CAS approach.

socio-environmental innovations.

## **2.4. The diffusion of socio-environmental-innovation model in the RENDRUS network based on CAS approach**

The diffusion in the RENDRUS network is the process by which socio-environmental innovation is communicated through certain channels (meetings, the Internet, etc.) over time among rural producers and their organizations, governmental institutions and academic institutions. In this case, when new answers to complex problems regarding sustainable rural development emerge from the interactions between rural producers and governmental and academic institutions, and these are diffused and adopted or rejected by rural producers, alterations occur at the functional and structural levels of the RENDRUS network, giving rise to social change due to the activation of peer networks around such socio-environmental innovation in the following four areas: transformation of primary, agricultural and livestock production; rural industry and marketing; rural non-agricultural services and handicrafts; and rural extension.

Following Ref. [3], we present the five attributes of socio-environmental innovations of the RENDRUS network as follows:

• *Relative advantage*: the degree to which a socio-environmental innovation is perceived as better than the practice it supersedes.

• *Compatibility*: the degree to which it is perceived as consistent with the existing socio-cultural values, beliefs and past experiences of rural producers.

• *Complexity*: the degree to which it is perceived as difficult to understand and use.

• *Trialability*: the degree to which it may be experimented with on a limited, divisible basis.

• *Observability*: the degree to which its results are visible to other rural producers.

In summary, socio-environmental innovations of the RENDRUS network that are perceived as relatively advantageous, compatible with existing socio-cultural values, beliefs and experiences, relatively easy to adapt, observable and divisible for trial will be adopted more rapidly by rural producers.

As discussed by Bass [18], two models of how innovations diffuse through social systems are recognized in the literature: endogenous and exogenous. On the one hand, in the endogenous diffusion model, how fast an innovation spreads is a function of its own popularity among the members of the social system. Following Ref. [18], in this case the proportion of the social system that has adopted the innovation over time starts out slow, slowly builds to a critical mass, where it achieves exponential growth, and finally levels off as it saturates the social system. As seen in **Figure 3**, a bell-shaped curve shows the data about the general endogenous diffusion process on a frequency basis, whereas the s-shaped curve shows the same data on a cumulative basis [3]. On the other hand, in the exogenous diffusion model, potential adopters respond directly to exogenous effects.

From the CAS perspective, both variety and reactivity are necessary for the diffusion of an innovation [6]. Variety refers to the diverse population needed for emergence and adaptation, and it is necessary for the information exchange that takes place between an innovation sender and an innovation receiver, whereas reactivity refers to the sensitivity to change, where only those populations with adaptability to change can survive at the higher fitness thresholds that occur during cascading mutation/extinction [6]. The information exchange between an innovation sender and an innovation receiver is a critical process in the diffusion of socio-environmental innovation in the RENDRUS network. It means that more information about best practices and new technology corresponds to less uncertainty in rural producers' perceptions of the relative advantage, compatibility and complexity of innovations, creating better conditions for emergence and adaptation. The diffusion of socio-environmental-innovation model in the RENDRUS network based on the CAS approach contains the following four elements (see **Figure 4**):

• *Heterogeneous agents*. Small rural producers and their organizations, governmental institutions, academic institutions and the knowledge society. The agents (*ag*) have two attributes: the number of neighbours having already adopted the socio-environmental innovations (*an*) and a random threshold for adoption (*rt*) with a normal distribution *N*(100,25) to satisfy the heterogeneity condition. The Bass model [18] allows us to evaluate the evolution of both endogenous and exogenous diffusion as follows:

**Figure 3.** The endogenous diffusion model of an innovation over time by members of a social system, adapted from Ref. [3].

**Figure 4.** The Netlogo grid based on four different values for MAX-PXCOR and MAX-PYCOR: (a) MAX-PXCOR = MAX-PYCOR = 16, for a total of 256 patches; (b) MAX-PXCOR = MAX-PYCOR = 30, for a total of 900 patches; (c) MAX-PXCOR = MAX-PYCOR = 50, for a total of 2500 patches; and (d) MAX-PXCOR = MAX-PYCOR = 100, for a total of 10,000 patches.

$$f\_t = (p + q \, F\_t)(1 - F\_t) \, , \tag{1}$$

where *Ft* describes the proportion of potential adopters having already adopted the innovation, the *p* coefficient refers to the exogenous effect, and the *qFt* coefficient indicates the endogenous effect at time *t*. The endogenous diffusion model is a special case of Eq. (1) when *p* = 0. Additionally, one mechanism assumed by endogenous diffusion models is contagion, whereby those who have adopted the innovation directly promote it to those with whom they are in contact [19]. In this study, we use a variant of the endogenous diffusion model based on a threshold, called an information cascade, whereby the number of prior adoptions is a source of credible information. The importance of threshold models lies in the aggregation of popularity [20].
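The behaviour described by Eq. (1) can be checked numerically. The short Python sketch below (an illustration; the coefficient values *p* = 0.01 and *q* = 0.3 are arbitrary assumptions, not values from this chapter) iterates the Bass model in discrete time and exhibits the slow start, exponential growth and saturation discussed above, as well as the purely endogenous case *p* = 0, which needs an initial seed of adopters:

```python
# Discrete-time sketch of the Bass diffusion model of Eq. (1):
#   f_t = (p + q * F_t)(1 - F_t),   F_{t+1} = F_t + f_t
# p: exogenous effect; q * F_t: endogenous effect;
# F_t: cumulative proportion of adopters at time t.

def bass_curve(p, q, steps, f0=0.0):
    """Return the cumulative adoption proportions F_0 .. F_steps."""
    F = f0
    history = [F]
    for _ in range(steps):
        f = (p + q * F) * (1.0 - F)   # Eq. (1): adoptions at time t
        F = min(1.0, F + f)
        history.append(F)
    return history

# Mixed diffusion: a small exogenous push plus imitation.
mixed = bass_curve(p=0.01, q=0.3, steps=60)

# Purely endogenous diffusion (p = 0): nothing happens without an
# initial seed of adopters, mirroring the single seed agent used later.
endo = bass_curve(p=0.0, q=0.3, steps=60, f0=0.001)

# The frequency curve f_t is bell-shaped: slow start, peak, decline.
increments = [b - a for a, b in zip(mixed, mixed[1:])]
print(max(increments) > increments[0] > 0)   # True
```

The cumulative series `mixed` traces the s-shaped curve of **Figure 3**, while its per-step increments trace the bell-shaped frequency curve.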

• *Rules of interaction*. Agents interact with their neighbours, and the decision to adopt socio-environmental innovations depends on whether the *rt* value is less than a non-linear function evaluated in terms of the number of adopters (*na*), the endogenous effect (*ee*) and the social cohesion between agents (*sc*), whose range can be dynamically updated to revise the *rt* of an agent, as follows:

$$rt < ee \left(\frac{na}{ag}\right) + (an \cdot sc) \tag{2}$$

• *A connectivity network*. The network is built by generating each agent from 0 to 4500. Once an agent is created (the old agent), a new referred position is searched for five neighbours, to create the new agent.

• Agents interchange information indicating the number of their own neighbours having already adopted the socio-environmental innovations (prior adoptions).

**Table 1.** The range of simulation parameters.

| Simulation parameter | Lower value | Upper value |
| --- | --- | --- |
| Endogenous effects (*ee*) | 10 | 20 |
| Social cohesion effects (*sc*) | 30 | 50 |


## **3. Agent-based simulation model for the diffusion of socio-environmental innovation in the RENDRUS network**

#### **3.1. Netlogo simulation software**

Netlogo software, first developed in the late 1990s by Uri Wilensky, is a general-purpose agent-based modelling language used worldwide that provides a graphical modelling environment [21], freely available on the Netlogo website (https://ccl.northwestern.edu/netlogo). As Wilensky [22] explains, it is an extension of the Logo language, in which the user controls a graphical turtle by issuing commands, and it includes a grid of patches, each patch being a computationally active cell. Turtles and patches are self-contained objects with internal local state. In Netlogo models, time passes in discrete steps, called ticks. Netlogo allows the user to scale space and time, so that, for example, 1 m² = 1 patch and one tick can represent a minute or a day, and so on. The grid can be adjusted using MAX-PXCOR (horizontal direction) and MAX-PYCOR (vertical direction) in two dimensions (2D), to create a larger or smaller world while keeping the view a manageable size on the main screen, for a total of MAX-PXCOR \* MAX-PYCOR patches. **Figure 4** illustrates the Netlogo grid with different MAX-PXCOR and MAX-PYCOR values, where the worlds wrap both horizontally and vertically. As seen in **Figure 5**, the grid can be visualized in 2D and three dimensions (3D).
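The wrapped (toroidal) grid just described can be mimicked outside Netlogo. The following Python sketch (an illustration, not Netlogo code) computes the patch total for given MAX-PXCOR and MAX-PYCOR values as used in this chapter, and resolves the eight surrounding patches of a cell with horizontal and vertical wrapping:

```python
# Illustrative sketch of the chapter's wrapped Netlogo-style grid.
# Following the text, a world with MAX-PXCOR = MAX-PYCOR = n holds n * n patches.

def total_patches(max_pxcor, max_pycor):
    """Patch total as defined in the text: MAX-PXCOR * MAX-PYCOR."""
    return max_pxcor * max_pycor

def moore_neighbours(x, y, max_pxcor, max_pycor):
    """The 8 surrounding patches, wrapping both horizontally and vertically."""
    return [((x + dx) % max_pxcor, (y + dy) % max_pycor)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

print(total_patches(16, 16))           # 256 patches, as in Figure 4(a)
print(moore_neighbours(0, 0, 16, 16))  # corner patch wraps to (15, 15), etc.
```

The modulo operation is what makes the world wrap: a patch on the left edge counts patches on the right edge among its neighbours, exactly as in the Netlogo views of **Figure 4**.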

#### **3.2. The agent-based simulation model implementation**

Our agent-based simulation model implemented in Netlogo software produces four artificial worlds, each populated by 1000, 2000, 3000 and 4500 heterogeneous agents, respectively. In order to ensure that the simulation parameters are realistic, we base our agents upon RENDRUS network data about small rural producers available on the RENDRUS website

**Figure 5.** The Netlogo grid of patches and turtles in (a) 2D and (b) 3D.

(http://rendrus.extensionismo.mx/rendrus/rendrus). The artificial worlds created vary in the following experimental factors: (a) endogenous and (b) social cohesion effects. We use them to study the rate of adoption, as the relative speed with which socio-environmental innovations are adopted over time. The endogenous effects refer to the popularity of socio-environmental innovations, based on the relative advantage that rural producers perceive from the socio-environmental innovation as being better than the practice it supersedes. The popularity increases when the results of the socio-environmental innovation are visible to other rural producers, whereas social cohesion effects represent the compatibility perceived by rural producers about the innovation as consistent with the existing socio-cultural values, past experiences and specific needs of potential adopters. **Table 1** shows the endogenous and social cohesion effects' numerical values used in the simulation model.

The time variable allows us to plot the number of RENDRUS members adopting the socio-environmental innovation over time. In this model, each 'tick' from Netlogo represents 1 day in the time scale. The simulation model starts with one agent who has already adopted the socio-environmental innovation. The simulation model ends after 1200 simulation days (ticks). As can be seen in **Figures 6**–**9**, the proportion of adopters starts out slow due to the interactions between heterogeneous agents, then slowly builds to a critical mass, growing exponentially, and finally the number of adopters saturates the artificial world that represents the RENDRUS network.
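The run just described can be paraphrased outside Netlogo. The following Python fragment is a simplified, illustrative stand-in for our Netlogo implementation, not the implementation itself: the single seed adopter, the five-neighbour wiring, the threshold rule of Eq. (2) and the 1200-tick horizon follow the text, but the world size (500 agents, smaller than the 1000-4500 agents above), the update order and the random seed are assumptions made for brevity:

```python
import random

# Simplified stand-in for the chapter's Netlogo run (illustrative parameters).
random.seed(7)

N, EE, SC, TICKS = 500, 10, 30, 1200   # agents, Table 1 effects, 1 tick = 1 day

# Random adoption threshold rt ~ N(100, 25) for each agent (heterogeneity).
rt = [random.gauss(100, 25) for _ in range(N)]

# Connectivity: each newly created agent is linked to five already
# created (old) agents, sketching the network-construction rule.
neighbours = [[] for _ in range(N)]
for new in range(1, N):
    for old in random.sample(range(new), min(5, new)):
        neighbours[new].append(old)
        neighbours[old].append(new)

adopted = [False] * N
adopted[0] = True            # the simulation starts with a single adopter
counts = [1]                 # adopters per tick, for plotting the s-curve

for tick in range(TICKS):
    na = counts[-1]          # number of adopters at the start of the tick
    for a in range(N):
        if not adopted[a]:
            an = sum(adopted[b] for b in neighbours[a])  # adopting neighbours
            if rt[a] < EE * (na / N) + an * SC:          # Eq. (2)
                adopted[a] = True
    counts.append(sum(adopted))
    if counts[-1] == N or counts[-1] == counts[-2]:
        break                # saturated, or a fixed point was reached

print(counts[0], counts[-1])
```

Plotting `counts` against the tick index reproduces the qualitative pattern of **Figures 6**–**9**: a slow start followed, when the cascade takes off, by rapid growth towards saturation.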

#### **3.3. Agent-based simulation model validation**


As Wilensky and Rand [12] explain, simulation model validation is the process of determining whether the implemented simulation model corresponds to some phenomenon in the real world. Our agent-based simulation model was validated using the dynamic technique called sensitivity analysis, in which values of simulation parameters are systematically changed over some range of interest and the simulation model's behaviour is observed [23]. This technique allows identifying the simulation parameters to which the simulation model behaviour is very sensitive. The simulation parameter considered to carry out the sensitivity analysis is the social cohesion effect (*sc*). In this case, the agent-based simulation model is executed considering two values for this parameter: 20 and 50. The artificial world considers 4500 heterogeneous agents. The endogenous effect (*ee*) is fixed at 10. The simulation model ends after 1200 simulation days (ticks). The simulation results are illustrated in **Figures 10** and **11**. As can be seen in **Figure 10**, the proportion of adopters reaches nearly 50% of the total population in 1200 ticks. In this case, the rate of adoption, as the relative speed with which socio-environmental innovations are adopted over time, is very slow. On the contrary, as can be seen in **Figure 11**, the proportion of adopters starts out slow, then grows exponentially, and in just 298 ticks the number of adopters saturates the artificial world. Therefore, more social cohesion among agents in the RENDRUS network corresponds to less time to adopt socio-environmental innovations and *vice versa*, and less

**Figure 6.** An artificial world populated by 1000 heterogeneous agents, *ee* = 10, *sc* = 30.

**Figure 7.** An artificial world populated by 2000 heterogeneous agents, *ee* = 10, *sc* = 30.

**Figure 8.** An artificial world populated by 3000 heterogeneous agents, *ee* = 10, *sc* = 30.

**Figure 9.** An artificial world populated by 4500 heterogeneous agents, *ee* = 10, *sc* = 30.



**Figure 10.** An artificial world populated by 4500 heterogeneous agents, *ee* = 30, *sc* = 20.

**Figure 11.** An artificial world populated by 4500 heterogeneous agents, *ee* = 10, *sc* = 50.

social cohesion corresponds to much more time to the adoption. In this case, the innovation adoption is very sensitive to the social cohesion of the RENDRUS network.
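This sensitivity experiment can be sketched in code. The Python fragment below (an illustrative stand-in, not our Netlogo implementation) simulates the same artificial world twice, changing only *sc* between 20 and 50 as above; for brevity the world holds 500 agents rather than 4500, and the network wiring and random seed are assumptions. Because both runs share the same thresholds and network, raising *sc* can only enlarge the adopter set, so the comparison is deterministic:

```python
import random

# Sensitivity-analysis sketch: one world, two runs differing only in sc.
random.seed(11)

N, EE, TICKS = 500, 10, 1200
rt = [random.gauss(100, 25) for _ in range(N)]   # thresholds rt ~ N(100, 25)
neighbours = [[] for _ in range(N)]
for new in range(1, N):                          # five-neighbour wiring
    for old in random.sample(range(new), min(5, new)):
        neighbours[new].append(old)
        neighbours[old].append(new)

def run(sc):
    """Simulate diffusion under a given sc; return adopter counts per tick."""
    adopted = [False] * N
    adopted[0] = True                            # single seed adopter
    counts = [1]
    for _ in range(TICKS):
        na = counts[-1]
        for a in range(N):
            if not adopted[a]:
                an = sum(adopted[b] for b in neighbours[a])
                if rt[a] < EE * (na / N) + an * sc:      # Eq. (2)
                    adopted[a] = True
        counts.append(sum(adopted))
        if counts[-1] == N or counts[-1] == counts[-2]:
            break                                # saturated or fixed point
    return counts

low, high = run(20), run(50)
# Adoption under sc = 50 is never slower than under sc = 20 in the same world.
print(low[-1], high[-1])
```

Comparing the lengths and final values of `low` and `high` reproduces the qualitative contrast between **Figures 10** and **11**.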

#### **3.4. Agent-based simulation model verification**

As Wilensky and Rand [12] explain, simulation model verification is the process of making sure that the simulation model has been correctly implemented on a computer using simulation software. In Netlogo software, the compilation and execution processes that prepare the model to run happen behind the scenes and require no intervention by the user; however, syntax and other coding errors that cause runtime errors during the compilation and execution processes interrupt the simulation [24]. In this phase, we eliminated the 'bugs' from the code so that the model was correctly implemented, free of errors.

## **4. Agent-based simulation model analysis**


## **4.1. Designing plausible simulation scenarios for achieving sustainable rural development in Mexico**

Sustainable development is a process of change in which the exploitation of resources, the direction of investment, the orientation of technological development and institutional change are made consistent with both future and present needs [25]. Mexico is a country committed to addressing sustainable development, as demonstrated by the actions undertaken over the last few years [26]. For instance, in 2001 the Mexican Congress approved the Law on Sustainable Rural Development, in which sustainable rural development is defined as the improvement of social welfare and economic activities in the territory specified outside the urban centres, considered in accordance with the applicable provisions, ensuring the permanent conservation of natural resources, biodiversity and ecosystem services in that territory. Another tangible result is the National Network for Sustainable Rural Development (RENDRUS), which promotes a series of annual meetings for exchanging and evaluating successful experiences between rural producers, seeking a process of collective learning at different levels. In this context, we consider the design of plausible simulation scenarios essential for a better diffusion of socio-environmental innovations through the RENDRUS network in order to improve the social welfare in rural areas, ensuring the permanent conservation of natural resources, biodiversity and ecosystem services in such areas.

Scenarios were introduced over 50 years ago, first by Herman Kahn, as a means to overcome the limits of reductionist thinking in response to the difficulty of creating accurate forecasts. Scenarios reject the notion of wholly predictable futures of a system and instead explore alternative futures and the paths to each, emphasizing the need to treat disruptive change as normal [27]. Plausibility-based scenarios, described by Schoemaker [28], are useful approaches in situations characterized by increasing uncertainty and complexity. According to Peterson et al. [29], in order to build scenarios, it is important to specify what is known and unknown about the system's dynamics; then, the alternative ways that the system could evolve must be identified. After that, a set of scenarios is built, through which our current thinking about the system should be expanded. Following Ref. [29], the dynamics of scenarios must be plausible; neither nature nor the agents involved in the scenario should behave in implausible ways. The most important part of a scenario's plausibility is likely to be the behaviour of agents. In this direction, the behaviour of small rural producers and their organizations, governmental institutions, academic institutions and the knowledge society could improve the diffusion of socio-environmental innovations through the RENDRUS network. Our simulation model considers that agents' behaviour is influenced by the *rt* parameter (the random threshold for adoption). The *rt* indicates the popularity of the socio-environmental innovations. **Table 2** shows the range of simulation parameters for plausible simulation scenarios.
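Since agent behaviour is driven by the *rt* threshold, one way to explore such scenarios is to shift the mean of its distribution and compare the resulting adoption. The Python sketch below is purely illustrative: the scenario means (80, 100, 120) are hypothetical values chosen for the example, not the ranges of **Table 2**, and the world size, wiring and seed are assumptions. Sharing the same noise across scenarios makes the comparison deterministic: lowering every threshold can only enlarge the final adopter set.

```python
import random

# Scenario-design sketch: shift the mean of the rt threshold distribution.
random.seed(3)

N, EE, SC, TICKS = 500, 10, 30, 1200
z = [random.gauss(0, 1) for _ in range(N)]   # noise shared across scenarios
neighbours = [[] for _ in range(N)]
for new in range(1, N):                      # five-neighbour wiring
    for old in random.sample(range(new), min(5, new)):
        neighbours[new].append(old)
        neighbours[old].append(new)

def final_adopters(rt_mean):
    """Run one scenario with rt ~ N(rt_mean, 25); return the final adopter count."""
    rt = [rt_mean + 25 * zi for zi in z]
    adopted = [False] * N
    adopted[0] = True                        # single seed adopter
    counts = [1]
    for _ in range(TICKS):
        na = counts[-1]
        for a in range(N):
            if not adopted[a]:
                an = sum(adopted[b] for b in neighbours[a])
                if rt[a] < EE * (na / N) + an * SC:      # Eq. (2)
                    adopted[a] = True
        counts.append(sum(adopted))
        if counts[-1] == N or counts[-1] == counts[-2]:
            break                            # saturated or fixed point
    return counts[-1]

# Hypothetical scenarios: more popular innovations mean lower thresholds.
results = {m: final_adopters(m) for m in (80, 100, 120)}
print(results)
```

A lower scenario mean stands for a more popular innovation, so adoption is weakly larger as the mean decreases; the actual scenario ranges are those of **Table 2**.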


Modelling and Simulation of Complex Adaptive System: The Diffusion of Socio-Environmental Innovation...

http://dx.doi.org/10.5772/67740

| Simulation parameter | Lower value | Upper value |
|---|---|---|
| Endogenous effects (*ee*) | 10 | 10 |
| Social cohesion effects (*sc*) | 50 | 50 |
| Random threshold for adoption (*rt*) | *N*(50, 15) | *N*(70, 15) |

**Table 2.** The range of simulation parameters for plausible simulation scenarios.
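The two plausible scenarios differ only in the distribution of the random adoption threshold *rt*. A minimal Python sketch of how such per-agent thresholds can be sampled (illustrative only — the chapter's model is not reproduced here, and `sample_thresholds` is a hypothetical helper):

```python
import random

# Hypothetical sketch (not the authors' model code): draw the random
# adoption threshold rt for each of the 4500 agents under the two
# plausible scenarios, where rt ~ N(50, 15) or rt ~ N(70, 15).
SCENARIOS = {
    "scenario_1": {"ee": 10, "sc": 50, "rt_mean": 50, "rt_sd": 15},
    "scenario_2": {"ee": 10, "sc": 50, "rt_mean": 70, "rt_sd": 15},
}

def sample_thresholds(scenario, n_agents=4500, seed=0):
    """Sample one adoption threshold per agent for a given scenario."""
    rng = random.Random(seed)
    params = SCENARIOS[scenario]
    return [rng.gauss(params["rt_mean"], params["rt_sd"])
            for _ in range(n_agents)]

thresholds = sample_thresholds("scenario_1")
print(len(thresholds))  # 4500
```

The only difference between the two simulation runs is where the thresholds centre: near 50 in the first scenario and near 70 in the second.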

#### **4.2. Analysis of plausible scenarios**

**Figure 12** illustrates the evolution of adopters over time. We observe that the proportion of adopters starts out fast, then grows exponentially and finally saturates the artificial world in just 15.6 days (ticks).

As seen in **Figure 13**, the proportion of adopters starts out slowly, then grows exponentially and finally saturates the artificial world in 88.5 days (ticks). Therefore, greater popularity of socio-environmental innovations (a lower adoption threshold *rt*) among small rural producers and their organizations, governmental institutions, academic institutions and the knowledge society corresponds to less time for the RENDRUS network, considered as a whole, to adopt them.
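The qualitative pattern just described — slower saturation for a higher mean threshold — can be reproduced with a toy threshold-diffusion model. This is an illustrative sketch, not the chapter's actual model; `ticks_to_saturation` and its adoption rule are our assumptions:

```python
import random

# Toy threshold-diffusion model (an illustrative stand-in for the chapter's
# agent-based model): each agent draws a personal threshold rt from
# N(rt_mean, rt_sd); a non-adopter's chance of adopting per tick grows with
# the current adopter fraction and shrinks as its threshold rises.
def ticks_to_saturation(rt_mean, rt_sd=15, n=500, seed=1, max_ticks=500):
    """Ticks until at least 95% of agents have adopted (None if never)."""
    rng = random.Random(seed)
    thresholds = [rng.gauss(rt_mean, rt_sd) for _ in range(n)]
    adopted = [i < n // 50 for i in range(n)]  # seed 2% early adopters
    for tick in range(1, max_ticks + 1):
        frac = sum(adopted) / n
        if frac >= 0.95:
            return tick
        for i in range(n):
            if not adopted[i]:
                p = frac * max(0.0, (100.0 - thresholds[i]) / 100.0)
                if rng.random() < p:
                    adopted[i] = True
    return None

# A lower mean threshold (a more popular innovation) saturates sooner.
fast = ticks_to_saturation(rt_mean=50)
slow = ticks_to_saturation(rt_mean=70)
```

This reproduces only the qualitative ordering of Figures 12 and 13; the quantitative saturation times (15.6 and 88.5 ticks) come from the authors' model, not from this sketch.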

**Figure 12.** Artificial world populated by 4500 heterogeneous agents, *ee* = 10, *sc =* 50, *rt* ~ *N*(50, 15).

**Figure 13.** Artificial world populated by 4500 heterogeneous agents, *ee* = 10, *sc =* 50, *rt* ~ *N*(70, 15).

## **5. Concluding remarks**

Socio-environmental innovation is defined as a process of gradual social change through action research in localized territories; its orientation is not only to give a creative answer to linked problems of rural development but also to generate learning that leads to the autonomy of the actors, reflected in collective benefit. The RENDRUS network is a governmental initiative to establish links between producers, their organizations and the knowledge society to foster sustainable rural development in Mexico. This chapter presented a novel agent-based simulation model of the diffusion of socio-environmental innovation in the RENDRUS network, built from the CAS perspective, in order to understand the diffusion process in the network. Modelling and simulating socio-environmental innovation diffusion is important because it helps us understand the use of innovations that can support sustainable rural development. From the simulation results, we observed, on the one hand, that more social cohesion among small rural producers and their organizations, governmental institutions, academic institutions and the knowledge society corresponds to less time to adopt socio-environmental innovations and, on the other hand, that greater popularity of socio-environmental innovations among them also corresponds to less adoption time. In conclusion, the diffusion of socio-environmental innovation that contributes to sustainable rural development in Mexico depends upon a deep understanding of the interactions among rural producers and their organizations, governmental and academic institutions.

## **Author details**


Aida Huerta Barrientos1,2\* and Yazmin Dillarza Andrade1

\*Address all correspondence to: aida.huerta@comunidad.unam.mx

1 Faculty of Engineering, National Autonomous University of Mexico, Mexico City, Mexico

2 The Complexity Sciences Center, National Autonomous University of Mexico, Mexico City, Mexico

## **References**


[1] Sagarpa [Internet]. 2016. Available from: http://rendrus.extensionismo.mx/rendrus/

[2] Quiroga C A, Barrera Gaytan J F. ¿Evaluar la innovación socioambiental? In: Bello Baltazar E, Naranjo Piñera E J, Vandame R, editors. La otra innovación para el ambiente y la sociedad en la frontera sur de México. Mexico: El Colegio de la Frontera Sur; 2012.

[3] Rogers E M. Diffusion of innovations. London: Collier Macmillan Publishers; 1983.

[4] Viale R, Pozzali A. Complex adaptive systems and the evolutionary triple helix. Critical Sociology. 2010; **36**(4).

[5] Yilmaz L. Innovation systems are self-organizing complex adaptive systems. Association for the Advancement of Artificial Intelligence. 2008.

[6] Rogers E M, Medina U E, Rivera M A, Wiley C J. Complex adaptive systems and the diffusion of innovations. The Public Sector Innovation Journal. 2005; **10**(3): 2–26.

[7] Schwarz N. Agent-based modeling of the diffusion of environmental innovations. An empirical approach. In: Proceedings of the 5th International EMAEE Conference on Innovation; 17–19 May 2007.

[8] Sayama H. Introduction to the modeling and analysis of complex systems. New York: Open SUNY Textbooks; 2015.

[9] Holland J H. Complexity, a very short introduction. New York: Oxford University Press; 2014.

[10] Huerta-Barrientos A, Flores de la Mota I. Modeling sustainable supply chain management as a complex adaptive system: the emergence of cooperation. In: Krmac E, editor. Sustainable supply chain management. Croatia: InTech; 2016. DOI: 10.5772/62534

[11] Holland J H. Hidden order: how adaptation builds complexity. USA: Perseus Books; 1995.

[12] Wilensky U, Rand W. An introduction to agent-based modeling. Cambridge: The MIT Press; 2015.

[13] Miller J H, Page S E. Complex adaptive systems. An introduction to computational models of social life. Princeton: Princeton University Press; 2007.

[14] Bond A H, Gasser L. Readings in distributed artificial intelligence. Los Altos, CA, USA: Morgan Kaufmann; 1988.

[15] Chaib-draa B, Moulin B, Mandiau R, Millot P. Trends in distributed artificial intelligence. Artificial Intelligence Review. 1992; **6**: 35–66.

[16] Wooldridge M, Jennings N R. Intelligent agents: theory and practice. Knowledge Engineering Review. 1995; **10**: 115–152.

[17] Macal C, North M. Introductory tutorial: agent-based modeling and simulation. In: Proceedings of the Winter Simulation Conference, December 2011. p. 1456–1468.

[18] Bass F M. A new product growth for model consumer durables. Management Science. 1969; **15**: 215–227.

[19] Rossman G. The diffusion of the legitimate and the diffusion of legitimacy. Sociological Science. 2014; **1**: 49–69.

[20] Granovetter M S. Threshold models of collective behavior. American Journal of Sociology. 1978; **83**: 1420–1443.

[21] Dorner D. The logic of failure: recognizing and avoiding error in complex situations. New York: Basic; 1997.

[22] Wilensky U. GasLab: An extensible modeling toolkit for exploring micro- and macro-views of gases. In: Roberts N, Feurzeig W, Hunter B, editors. Computer modeling and simulation in science education. Berlin: Springer Verlag; 1999. p. 151–178.

[23] Banks J. Handbook of simulation: principles, methodology, advances, applications, and practice. Georgia: John Wiley & Sons, Inc; 1998.

[24] Stigberg D. An introduction to the Netlogo modeling environment. In: Westervelt J D, Cohen G L, editors. Ecologist-developed spatially explicit dynamic landscape models, modeling dynamic systems. New York: Springer; 2012.

[25] World Commission on Environment and Development. Our common future. Oxford: Oxford University Press; 1987.

[26] Huerta-Barrientos A, Lara-Rosano F. An initiative of Mexican Government towards sustainable rural development and innovation: The national network RENDRUS case. In: Lasker G E, Hiwaki K, editors. Sustainable development and global community. Ontario: The International Institute for Advanced Studies in Systems Research and Cybernetics (IIAS); 2016.

[27] Wilkinson A, Kupers R, Mangalagiu D. How plausibility-based scenario practices are grappling with complexity to appreciate and address 21st century challenges. Technological Forecasting & Social Change. 2013; **80**: 699–710.

[28] Schoemaker P J H. Multiple scenario development: its conceptual and behavioral foundation. Strategic Management Journal. 1993; **14**(3): 193–213.

[29] Peterson G D, Cumming G S, Carpenter S R. Scenario planning: a tool for conservation in an uncertain world. Conservation Biology. 2003; **17**(2): 358–366.


**Chapter 2**


## **Rendering Techniques in 3D Computer Graphics Based on Changes in the Brightness of the Object Background**

Nika Bratuž, Helena Gabrijelčič Tomc and Dejana Javoršek

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67737

#### **Abstract**

Maintaining accurate colour constancy and constant colour appearance are only a few of the challenges one must conquer in modern-day digital three‐dimensional (3D) production. Many different factors influence the reproduction of colour in 3D rendering, and one of the most important is certainly the rendering engine. In our research, we have studied the rendering of colours with three rendering engines (Blender Render, Cycles and Yafaray) of an open source 3D creation suite, based on changes in the brightness of the object background from 20 to 80%. In one of these cases, the colour of the object was adapted to the lighter background using the colour appearance model CIECAM02. With the analysis of colour differences, lightness and chroma between colours rendered using different rendering engines, we found that rendering engines interpret colour differently, although the RGB values of the colours and the scene parameters were the same. Differences were particularly evident when the rendering engine Cycles was used. However, Cycles also takes into account the object background. Numerical results of such research provide findings that relate to the respective environment, and they certainly demonstrate the successful implementation of the colour appearance model CIECAM02 in 3D technologies and, in our opinion, in other software packages for 3D computer graphics.

**Keywords:** 3D computer graphics, rendering engines, Blender Render, Cycles, Yafaray, CIECAM02

## **1. Introduction**

The creation of a static image in the three‐dimensional (3D) computer graphics pipeline involves object modelling, texturing, definition of materials and shading algorithms, illumination, camera setting and rendering. Exact algorithmic description and sampling of light (and consequently

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

colour) for the calculation of the final rendering is possible only with consideration of the basics of radiometry. Radiometry presents the set of mathematical tools and rendering algorithms for the description of electromagnetic waves and light phenomena [1]. The fundamentals of these algorithms are complex and multi‐layered; however, for visually accurate 3D CG (i.e. computer generated) imagery, the basic reflectance models should be at least understood and implemented in the workflow. In general, the reflection of light can be described with two functions, i.e. BRDF―bi‐directional reflectance distribution function and BSSRDF―bi‐directional scattering‐surface reflectance distribution function [2–4].


The BRDF function was defined by researcher Nicodemus [2] half a century ago, and today its application is well anchored in modern 3D computer graphics solutions. In general terms, and as is well known from colourimetry, the mathematical abstraction of the BRDF considers the parameters of the light source, the 3D model with defined textures and materials and the observer (virtual camera); therefore, the function can be implemented on all types of 3D object surfaces. Depending on the angle and the direction of the incident light, the function calculates the radiance value from the 3D object's surface in the observer's direction. Similarly to BRDF, BTDF―the bi‐directional transmittance distribution function―is also defined, calculating the portion and disposition of transmitted light. Based on the mathematical foundations of both the BRDF and BTDF functions, the BSDF―bi‐directional scattering distribution function―for the light scattering phenomena on surfaces and in materials was also determined [4].
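In symbols, the BRDF just described is commonly written (a standard textbook form, not an equation reproduced from this chapter) as the ratio of the differential reflected radiance to the differential incident irradiance:

$$
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
$$

where $\omega_i$ and $\omega_o$ are the incident and outgoing directions and $\theta_i$ is the angle between the incident direction and the surface normal.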

The requirements for achieving photorealistic renderings and describing all optical phenomena in 3D virtual space with different visualization technologies also demanded the development of the function BSSRDF [4]. With BSSRDF, the specific reflectance phenomena in translucent materials with a higher portion of light scattering are described. The so‐called sub‐surface light transport captures the light scattering between the starting point, where the light enters the material, and a (from the entrance's point of view) very distant exit point. With the implementation of this function, the issues of visualizing natural materials such as skin and wax were solved.

Reflectance models (shading algorithms), as derivatives of the above‐mentioned functions, define the type of interaction between material and light. Regarding their mathematical definition and the results of their application on objects, shading algorithms can be used as: (1) models for diffuse surfaces, when they describe surfaces with partial or total diffuse light reflectance; (2) models for surfaces with specific optical properties (metals, anisotropic materials); and (3) models for specular reflective and transmissive surfaces, describing total and partial specular reflectance and/or transmittance [5–8].

The most basic BRDF is implemented in the Lambert reflectance model, defining entirely diffuse surfaces. This model includes many physical and mathematical simplifications; however, it is still adequate for opaque and matte surfaces in CG visualizations [5].

Further, the empirical Phong model was developed, with a very basic level of consideration of lighting, observer (angle, distance) and normal direction for the calculation of reflected radiance at a surface point. Specular reflection in this model is calculated as an exponential function of a cosine function [9].
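The two models just described can be illustrated with a short sketch (textbook formulas; the function and parameter names are ours, not from any rendering engine): a Lambertian diffuse term plus the Phong exponential-cosine specular term.

```python
import math

# Lambert diffuse + Phong specular shading for one directional light.
# All direction vectors are assumed to be unit length.
def shade(normal, light_dir, view_dir, albedo=0.8, k_s=0.5, shininess=32):
    """Reflected radiance at a surface point for one directional light."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = max(0.0, dot(normal, light_dir))
    diffuse = (albedo / math.pi) * n_dot_l  # Lambert: entirely diffuse
    # Phong: mirror-reflect the light direction about the normal, then
    # raise its cosine with the view direction to the shininess exponent.
    reflected = [2.0 * n_dot_l * nc - lc for nc, lc in zip(normal, light_dir)]
    if n_dot_l > 0.0:
        specular = k_s * max(0.0, dot(reflected, view_dir)) ** shininess
    else:
        specular = 0.0  # light below the surface contributes nothing
    return diffuse + specular

# Head-on light and viewer: diffuse = 0.8/pi, specular = 0.5.
value = shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Raising `shininess` narrows the specular highlight, which is exactly the role of the exponential factor in the Phong (and later Blinn) formulation.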

The further developments in 3D rendering brought new solutions and advanced simulations (mathematical interpretations) of objects during rendering. Models named Oren‐Nayar, Torrance‐Sparrow, Blinn and models for anisotropic surfaces treat object surfaces as an organization of a large number of very small, differently oriented surfaces called microfacets.


With the calculation of light phenomena on a large number of small diffuse surfaces, the Oren‐Nayar model [5] describes a partially diffuse material surface (which can also include some specular areas, depending on the incident light). By contrast, the Torrance‐Sparrow model [6] was developed for metallic surfaces with specific highlights and shadows. The calculations in this model are performed for a large number of completely specular (metal) surfaces.

Blinn's approach to reflectance calculations involves a mathematical model that includes exponential averaging of the disposition of normal vectors on the small surfaces of a 3D object. The exponential factors included in Blinn's model determine very rapid changes of normals for smooth object surfaces, while these changes are small for diffuse and relief surfaces [7]. The limitation of Blinn's model is that it calculates symmetrical reflection only, whereas in nature and in 3D CG imagery many surfaces have asymmetrical light reflections. This led to the development of models for specific surfaces. Ashikhmin and Shirley [8] presented a BRDF model for objects with anisotropic surfaces (polished metal, hair and cloth). In this model, the properties of the reflected light at a defined surface point change depending on the rotation of the observation around that point.

In terms of mathematical description, models for specular and transmissive surfaces are in general simpler than the above‐mentioned models. The cause of their simplicity can be found in the non‐complex interactions between light and material, in both the geometrical and the physical sense. In these models, both the BRDF and BTDF functions calculate specular reflectance of the light rays so that the incident light on the surface is scattered at a specified angle (on totally specular surfaces, the angle of the incident light rays is identical to the angle of the reflected light). When specular transmission occurs, Snell's law and the Fresnel equations are implemented in the calculations [10].
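The refraction geometry governed by Snell's law can be sketched as follows (an illustrative helper, not code from the chapter):

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2). When the sine of the
# transmitted angle would exceed 1, no refraction occurs: the ray undergoes
# total internal reflection instead.
def refraction_angle(theta_i, n1, n2):
    """Transmitted angle in radians, or None on total internal reflection."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

# Air-to-glass at 45 degrees: the ray bends toward the normal.
into_glass = refraction_angle(math.radians(45.0), 1.0, 1.5)
# Glass-to-air at 45 degrees exceeds the critical angle (~41.8 degrees).
back_out = refraction_angle(math.radians(45.0), 1.5, 1.0)
```

In a renderer, the Fresnel equations would additionally split the energy between the reflected and transmitted rays; that part is omitted here.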

Illumination in 3D space can be defined as direct or indirect (any process that simulates indirect lighting is also referred to as global illumination). The principles of the two types of illumination, and consequently their equations, are different: for direct illumination, only the illumination coming directly from light sources is taken into account. In contrast, during the integration of indirect illumination, besides direct light sources, all the objects (background) in the scene are also considered as secondary light sources, and the different light interactions from all surfaces (materials) are performed and calculated. Rendering engines involve one or, more often, a set of rendering techniques whose algorithms translate all the data about 3D geometry, textures, materials, illumination and camera (observer) into 2D images.

During the last decades, indirect illumination has been the subject of various studies [11–15]. Each rendering engine uses its own combination of rendering algorithms and methods and, with the implementation of different variations of the functions BRDF, BTDF and BSDF, includes its own derivation of the light transport equation (LTE) [10].
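The LTE mentioned above is often written as the rendering equation; a standard form (not an equation reproduced from this chapter) is:

$$
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f(\mathbf{x}, \omega_i, \omega_o)\,L_i(\mathbf{x}, \omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i
$$

where $L_e$ is the emitted radiance, $f$ is the scattering function (BRDF/BSDF) at point $\mathbf{x}$ and the integral runs over the hemisphere $\Omega$ of incident directions.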

Path tracing was introduced by the researcher Kajiya [11] and is still used in modern solutions as a version of ''single‐direction'' path tracing or bi‐directional path tracing [16]. This technique is based on the Monte‐Carlo formulation of light transport. The paths of scattered light rays are generated by gradual tracing from a starting point at the camera to an ending point at the light sources. The basic parameter of this technique is path sampling, which demands a very large number of samples per pixel for quality and accuracy of the generated image. Rendering times are consequently very long; an insufficient number of samples usually results in rendering ''errors'', most often noise.
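The sampling trade-off described here can be demonstrated with a generic Monte-Carlo estimator (a stand-in one-dimensional integral, not an actual renderer):

```python
import random

# Monte-Carlo illustration of path sampling: the estimate of an integral
# (standing in for the per-pixel light transport integral) gets less noisy
# as the number of samples grows.
def mc_estimate(f, n_samples, rng):
    """Average of f at uniformly random points in [0, 1]."""
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

rng = random.Random(42)
integrand = lambda x: x * x                 # exact integral over [0, 1] is 1/3
coarse = mc_estimate(integrand, 16, rng)    # few samples: noisy estimate
fine = mc_estimate(integrand, 16384, rng)   # many samples: close to 1/3
```

The standard error of such an estimator shrinks as $1/\sqrt{N}$, which is why halving the noise of a path-traced image requires roughly four times as many samples.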

Namely, the results of many researches presented that the perception of colour is not depend‐ ing only on the observer, stimuli and light source but also on viewing conditions, media where the colour is observed (display, computer display, mobile phones) and specific condi‐ tions for each media (overexposure or glare) [23–25]. Besides, the studies demonstrated that cultural context and psychological aspects too have significant influence on the perception of colour [23, 26]. As a result, the application of colourimetry started to spread in different areas and many researches experimented various influences and conditions on colour perception

Rendering Techniques in 3D Computer Graphics Based on Changes in the Brightness of the Object Background

http://dx.doi.org/10.5772/67737


Instant global illumination [13] implements the principle of tracing a lower number of light rays from the light source and reconstructs a defined number of point light sources (so‐called virtual light sources) at the positions (points) where the ray paths intersect the scene objects and the background. The integrator then calculates the radiance of the object surfaces at the intersection points, taking into account the virtual light sources and the laws of indirect illumination.
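The virtual-light idea can be sketched as follows (a toy calculation, not any engine's API: Lambertian shading, inverse-square falloff, no occlusion test, and made-up virtual-light positions):

```python
import math

def vpl_radiance(point, normal, vpls):
    """Sum the contributions of virtual point lights at a shading point
    (Lambertian surface, inverse-square falloff, no visibility test)."""
    total = 0.0
    for pos, power in vpls:
        d = [p - q for p, q in zip(pos, point)]      # vector to the virtual light
        r2 = sum(c * c for c in d)                   # squared distance
        r = math.sqrt(r2)
        # cosine between surface normal and light direction, clamped at 0
        cos_theta = max(0.0, sum(n * c for n, c in zip(normal, d)) / r)
        total += power * cos_theta / r2
    return total

# two virtual lights deposited where rays from the light hit the scene (made-up data)
vpls = [((0.0, 0.0, 1.0), 10.0), ((1.0, 0.0, 2.0), 5.0)]
print(vpl_radiance((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), vpls))
```

A real integrator would also test visibility between the shading point and each virtual light; that step is omitted here for brevity.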

Besides the above‐mentioned techniques, the methods of photon mapping and particle tracing are also frequently implemented in the workflow. These techniques were developed by Jensen [12]. They, too, use simplifications in the calculations, with systematic distortions of the statistical data during sampling; in the rendering procedure, they introduce a systematic error into the radiance approximation. The basic idea is to construct paths from the lights, where every vertex on a path is treated as a sample of illumination. The calculation of the optical phenomena (reflection, refraction, transmission, scattering) is performed for every object surface in the space, considering the optical and colour properties of all objects. The method proceeds in two phases: in the first phase, the photon map is generated, whereas in the second a variation of ray tracing occurs.
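The two-phase procedure can be sketched with a toy one-dimensional ''photon map'' (our simplification: real implementations store 3D hit points in a kd-tree and a real scene would scatter photons non-uniformly):

```python
import random

def trace_photons(n, rng):
    """Phase 1: shoot photons from the light and store hit positions with power."""
    photon_map = []
    for _ in range(n):
        x = rng.random()      # where the photon lands on a 1-D 'floor'
        power = 1.0 / n       # each photon carries an equal share of the light power
        photon_map.append((x, power))
    return photon_map

def estimate_radiance(photon_map, x, radius=0.05):
    """Phase 2: density estimation - gather the photons stored near x."""
    gathered = sum(p for pos, p in photon_map if abs(pos - x) <= radius)
    return gathered / (2 * radius)   # power per unit length

photons = trace_photons(10_000, random.Random(1))
print(estimate_radiance(photons, 0.5))   # ~1.0 for this uniformly lit floor
```

The systematic error mentioned above enters through the gathering radius: a large radius blurs illumination detail, a small one makes the estimate noisy.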

The peak of development in rendering methods is unbiased rendering techniques, in which the simplifications and distortions in the calculation of the final rendered images are minimal [15]. Among these techniques, Monte Carlo light transport should at least be mentioned. The Monte Carlo method involves bi‐directional path tracing of light rays, with the starting point at the observer (camera) and the end point at the light source(s). The paths are processed with modification of the ray paths and with indirect illumination taken into account also in those parts of the scene that are excluded from the calculations in other rendering techniques [17].

Even though the general rendering algorithms are known, the exact solutions implemented in software packages are not open source, and apart from the Cornell box used for user testing, there is no standardized method available for objective testing of renderings and visualizations [18]. In fact, in the opinion of the developers, photorealism in CG imagery has already been achieved. However, it must be noted that in some references the visual perception of photorealism is considered from the observers' point of view; namely, some references state that photorealistic accuracy cannot be achieved without perceptual cues that remain unsolved [19, 20].

The International Commission on Illumination CIE (fr. Commission Internationale de l'Eclairage) established the basis of colourimetry at the beginning of the twentieth century. Nowadays, these bases are also the fundamentals of different derivatives of the numerical evaluation of colours [21]. Nevertheless, the accuracy of these fundamentals can be seriously questioned [22]. The results of many studies have shown that the perception of colour depends not only on the observer, the stimuli and the light source but also on the viewing conditions, the medium on which the colour is observed (display, computer display, mobile phones) and the specific conditions of each medium (overexposure or glare) [23–25]. Besides, studies have demonstrated that cultural context and psychological aspects also have a significant influence on the perception of colour [23, 26]. As a result, the application of colourimetry has spread to different areas, and many studies have examined various influences and conditions on colour perception [27–29], including studies of the colour appearance models [30].


As defined by the technical committee CIE TC1‐34, a colour appearance model is capable of predicting perceptual attributes of colour, such as lightness, chroma and hue [31]. Developed by the International Commission on Illumination CIE, CIECAM97s was an important step towards the formation of a uniform colour appearance model and is the foundation of the currently used CIECAM02 [27, 30, 32, 33]. This simple colour appearance model performs bi‐directional calculations on a large number of databases and is, due to its simple structure, practical and applicable in different areas [34, 35]. The model can be used for colour transforms [31], as a connection space in colour management [36, 37], for the calculation of colour differences [38, 39], for colour rendering predictions under different illumination sources and for the definition of metamerism [30, 40]. In our research, the CIECAM02 model was used to calculate colour transforms during changes of the background colour of a defined object.

In the last decade, the perception of colour and surface properties in 3D‐generated scenes has been analysed with different methods and experiments. Studies of the influence of illumination and materials on renderings revealed that observers perceive the colours reproduced in renderings differently from how they are predicted by the algorithms of the various rendering methods [41–43].

Xiao and colleagues [41, 42] demonstrated that, under different illumination conditions, there are differences in colour perception between graphical simulations of matte disks and specular spheres. Yang and colleagues [44, 45] presented an expanded study of the correlation between the colour perception of surfaces and illumination cues. In these studies, colour constancy depended on the number of light sources, especially in the colour perception of highlights, while the perception of overall surface specularity and of the background were found to be less relevant during the observations. In addition, other authors have analysed many aspects of colour constancy and colour perception in 2D and 3D scenes [46–49].

So far, in 3D computer‐generated imagery, only preliminary studies on the preservation of uniform colour perception of objects under different observation and illumination conditions have been published [50, 51]. The review of the references thus showed that a colour appearance model that would facilitate the prediction of perceptual colour attributes such as lightness, chroma and hue had still not been implemented in 3D virtual space.

In the presented research, we studied the rendering of colours with three rendering engines (Blender Render, Cycles and Yafaray) of an open‐source 3D creation suite, based on changes in the brightness of the object background from 20 to 80%. In one of these cases, the colour of the object was adapted to the lighter background using the colour appearance model CIECAM02. One of the main goals of the research was the implementation of the colour appearance model CIECAM02 in 3D technologies.

## **2. Experimental**

#### **2.1. Methods**

#### *2.1.1. Defining test setups*

To compare the different rendering engines, a simple scene was set up in the Blender open‐source 3D creation suite. The scene was composed of a background, an object, a light source and a camera (**Figure 1**). Three rendering engines were used, namely Blender Render, Cycles and Yafaray. Yafaray is an open‐source Monte Carlo ray‐tracing engine used for generating realistic images with metropolis ray tracing; it is used in the form of an add‐on and can generate realistic images using path tracing, bi‐directional path tracing and photon mapping. Blender Render is a rasterization engine that is not physically based: it geometrically projects objects to an image plane without advanced optical effects. Cycles is a physically based, unbiased rendering engine that employs a path‐tracing algorithm. Firstly, a simple grey chart was used to calibrate the scene, since the rendering engines require different light‐intensity settings. RGB values for each rendering engine were measured to ensure repeatability among the engines. Colour management was turned off to achieve accurate RGB values; in Blender, colour management is not entirely equivalent to the colour management used in other professional graphic applications.


| Element | Setting | Blender Render | Cycles | Yafaray |
|---|---|---|---|---|
| **Object** | Colour | 1331 colour samples (all engines) | | |
| | Material surface | Diffuse | Diffuse | Glossy‐diffuse |
| | Diffuse shading | Lambert | BSDF (Lambert in Oren‐Nayar) | Lambert |
| | Intensity | 1 | / | 1 |
| | Specular shading | Off | Off | Off |
| | Other effects | Off | Off | Off |
| **Background** | Colour | RGB20 (20% lightness) and RGB80 (80% lightness) (all engines) | | |
| | Material surface | Diffuse | Diffuse | Glossy‐diffuse |
| | Diffuse shading | Lambert | BSDF (Lambert in Oren‐Nayar) | Lambert |
| | Amount | 1 | / | 1 |
| | Specular | Off | Off | Off |
| | Other effects | Off | Off | Off |
| **Light source** | Lamp | Spot (all engines) | | |
| | Colour | White (all engines) | | |
| | Shape | Cone with 120° beam angle (all engines) | | |
| | Intensity | 1 | 4000 | 14 |
| **Camera** | Focal length | 35 mm (all engines) | | |
| | Settings | Auto (all engines) | | |
| **Render settings** | Dimensions | 800 × 800 pixels (all engines) | | |
| | Colour space | sRGB (all engines) | | |
| | Depth | 8 bit (all engines) | | |
| | Format | PNG (all engines) | | |
| | Anti‐aliasing | Gauss, 8 samples (all engines) | | |
| | Colour management | Off | / | / |

**Table 1.** Rendering engine settings for object, background, light source and camera and general rendering settings.

The background and object in **Figure 1** are composed of a diffuse material with intensity 1, without a specular or mirror component. The object was in the shape of a sphere. The diffuse shading model was set to the Lambert shader for Blender Render and Yafaray; Cycles only supports a BSDF that is composed of the Lambert and Oren‐Nayar shading models. The camera, with automatic settings, was set in front of the object at a distance of 10 units, and the light source was set directly behind the camera. The reflector was set to white light with a cone of 120° beam angle and constant fall‐off. The light intensity was adjusted per rendering engine to achieve repeatability and was 1 for Blender Render, 4000 for Cycles and 14 for Yafaray. The rendering engine settings were as follows: the image size was set to 800 × 800 pixels in an 8‐bit sRGB colour space, the number of anti‐aliasing samples per pixel was set to 8 with a Gaussian reconstruction filter, all shading options were on, including ray tracing, and the tile size was set to 64 × 64 units. Rendering was carried out by the central processing unit (Intel i7 4770).

**Figure 1.** Schematic representation of scene setup in Blender software (A) and rendered image (B).


The input colours were in the range RGB = [0, 0, 0] to RGB = [255, 255, 255] with an interval of 25.5 units per channel, adding up to 1331 samples. The background was defined at 20 and 80% lightness, meaning RGB20 = [51, 51, 51] for 20% lightness and RGB80 = [204, 204, 204] for 80% lightness. In **Table 1**, the important characteristics of each scene element are presented for all rendering engines.
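The sampling scheme can be reproduced as follows (a sketch; rounding the 25.5-unit steps to whole 8-bit values is our assumption, consistent with example colours such as [51, 77, 179] and the background values 51 and 204):

```python
# 11 levels per channel at 25.5-unit intervals: 0, 26, 51, 77, ..., 204, 230, 255
levels = [int(i * 25.5 + 0.5) for i in range(11)]

# full factorial grid over R, G and B -> 11^3 = 1331 input colours
samples = [(r, g, b) for r in levels for g in levels for b in levels]

print(len(samples))              # 1331
print((51, 77, 179) in samples)  # True - the example colour used in Figure 2
```

Note that 20% and 80% of 255 round to 51 and 204, the two background greys, so the backgrounds lie exactly on the sampling grid.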



### *2.1.2. Colour adaptation with CIECAM02 colour appearance model*

The above‐mentioned input colours were imported into the Blender software as input values RGBi on the RGB20 background, as input values RGBi on the RGB80 background and as adapted values RGBa on the RGB80 background (**Figure 2**). This gives a set of images that is later used for evaluation. Simultaneous contrast can also be observed between the input colours on the RGB20 and RGB80 backgrounds.


| Rendering engine | Colour difference ΔE00, 20→80% | Colour difference ΔE00, 20→80% CIECAM |
|---|---|---|
| Blender Render (BR) | 0 | 3.28 |
| Cycles (CY) | 1.67 | 6.29 |
| Yafaray (YF) | 0 | 4.42 |

**Table 2.** Average colour difference ΔE00 between input colour RGBi on RGB20 background and input colour RGBi on RGB80 background (without adaptation), and between input colour RGBi on RGB20 background and adapted RGBa colour on RGB80 background.

The adapted colour was calculated with the CIECAM02 colour appearance model as presented in **Figure 3**. From the input RGBi values, XYZi values were calculated and used to compute the appearance correlates lightness J, chroma C and hue h; via the reverse model, the adapted XYZa and RGBa values were calculated. The following parameters were used in both directions: the luminance of the adapting field was set to *L*A = 16 cd/m<sup>2</sup>, the white point to D65 and the surround to average. The relative luminance of the background was *Y*B = 20% for the input colours and *Y*B = 80% for the adapted colours.
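The first step of this workflow, RGBi → XYZi, is a standard sRGB-to-XYZ conversion; a minimal sketch using the sRGB (D65) matrix is shown below. The subsequent CIECAM02 forward and reverse transforms (to and from J, C and h) are not reproduced here:

```python
def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB to CIE XYZ (D65 white, Y of white = 100)."""
    def linearize(c):
        # undo the sRGB transfer function
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return 100 * x, 100 * y, 100 * z

print(srgb_to_xyz(255, 255, 255))  # ~ (95.05, 100.0, 108.9), the D65 white point
```

The same matrix, inverted, closes the loop on the right-hand side of Figure 3 (XYZa → RGBa).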

### *2.1.3. Evaluation*

Next to the input RGBi values and the adapted RGBa values, the lightest colour RGBs and the average colour RGBp on the spherical object in the rendered image were also obtained. CIELAB values were calculated for each sample of the input and adapted RGB values and presented graphically in the aforementioned sets. The colour difference ΔE00 was calculated between pairs of values, namely between (1) the input colour RGBi on the RGB20 background and the input colour RGBi on the RGB80 background and (2) the input colour RGBi on the RGB20 background and the adapted RGBa colour on the RGB80 background.

**Figure 2.** A set of images for colour RGBi = [51, 77, 179] for the Cycles rendering engine. From left to right: input colour RGB on RGB20 background, input colour RGB on RGB80 background and adapted RGB colour on RGB80 background.

**Figure 3.** Adaptation workflow; marking 'i' stands for input and marking 'a' for adapted colours.
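These pairs are compared with a colour-difference metric. The chapter uses CIEDE2000 (ΔE00), whose full formula is lengthy; as a simplified stand-in, the older CIE76 difference (a plain Euclidean distance in CIELAB) shows the principle:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB.
    (A simplified stand-in - the chapter itself uses CIEDE2000.)"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# e.g. a colour on the RGB20 background vs. the same colour on the RGB80 background
print(delta_e76((50.0, 10.0, 10.0), (50.0, 13.0, 14.0)))  # 5.0
```

CIEDE2000 adds lightness-, chroma- and hue-dependent weighting functions to this distance so that it tracks perceived differences more uniformly.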

## **3. Results and discussion**


## **3.1. Colour difference between input and adapted values**

Firstly, the average colour difference ΔE00 was calculated between the input colour RGBi on the RGB20 background and the input colour RGBi on the RGB80 background (without adaptation), and between the input colour RGBi on the RGB20 background and the adapted RGBa colour on the RGB80 background. The results are presented in **Table 2**. It was expected that the colour difference between the input colours RGBi on both backgrounds would be zero, since the input value was actually the same. Despite this, there was a slight difference between the read values for the Cycles rendering engine, so it can be concluded that in that case the background slightly affects the rendered colour. The average colour difference between the input RGBi and adapted RGBa values was greater than zero, since the input value was adapted to a different background and the colour is thus slightly changed. The smallest colour difference was obtained for Blender Render, followed by Yafaray; the greatest colour difference was calculated for Cycles.

## **3.2. Colour difference between input and rendered values**

Next, the colour difference ΔE00 between the values that were input into Blender and the values obtained from the rendered images (as the lightest colour RGBs and the average colour RGBp on the spherical object) was calculated for the input colours on the RGB20 and RGB80 backgrounds and the adapted colours on the RGB80 background. The results are presented in **Table 3**.

It can be noted that the colour differences between the input and rendered colour on RGB20 and RGB80 remain roughly the same, because the input colour in the setting was the same in all pairs. This can be deduced from **Table 2**, where the colour difference was zero for Blender Render and Yafaray, meaning that the background does not affect the colour for those two rendering engines.




**Table 3.** Average colour difference ΔE00 between input and rendered colours for lightest RGBs and average colours RGBp.

Next, a difference between the colour differences of the lightest RGBs and the average RGBp colour on the sphere can be noted. The values vary by less than 10% for Cycles and Yafaray, but there is a great difference between the lightest and the average colour for Blender Render; evidently, there is no difference between the average RGBp input and rendered colour on either background. A notable difference is also present when comparing the lightest colour RGBs rendered with Cycles, confirming that Cycles does take the background into account to some extent when rendering light colours.

In contrast, the colour difference between the adapted and the rendered adapted colour is high. The colour difference is higher for the average colour than for the lightest colour, which is consistent with the non‐adapted colours. The values here vary by more than 10%, most for Blender Render, which is quite the opposite of the non‐adapted colours. Presumably, the adapted colour is treated differently from the non‐adapted colour by the rendering engines.

#### **3.3. CIELAB evaluation**

CIELAB values were calculated for all colours, and the lightness L\* and the a\* and b\* co‐ordinates of the lightest RGBs and average RGBp colours are presented graphically. Following from left to right, the input colour RGBi on the RGB20 background, the input colour RGBi on the RGB80 background and the adapted colour RGBa on the RGB80 background are presented in each figure, with lightness L\* on the *y*‐axis and the sample on the *x*‐axis.

In **Figures 4** and **5**, a specific grouped pattern can be noted, which is created by the sample selection algorithm, where the colours follow in batches from darkest to lightest. The charts for the non‐adapted colours remain the same, but there is an increase in the lightness of the adapted colours, which is a result of the adaptation to a darker background. This effect can be clearly seen in the lower parts of the chart, where the darker colours reside. In **Figure 5**, the samples sit lower on the chart, since the lightness L\* of the average colours is lower, but the effect of the adaptation can nevertheless be seen in the darker colours.

**Figure 4.** Lightness L\* of lightest RGBs colour for each sample for Blender Render.

**Figure 5.** Lightness L\* of average RGBp colour for each sample for Blender Render.

In **Figures 6** and **7**, the a\* and b\* co‐ordinates of each sample for Blender Render are shown. In **Figure 6**, where the a\* and b\* co‐ordinates of the lightest RGBs colour are presented, it can be observed that the colour space roughly matches the sRGB gamut; all renderings took place in the sRGB colour space. For the adapted colours, there is a condensation of samples along lines running from the centre.

**Figure 6.** a\* and b\* co‐ordinates of lightest RGBs colour for each sample for Blender Render.
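The L\*, a\* and b\* values plotted in these figures follow from the standard XYZ-to-CIELAB transform; a minimal sketch, assuming a D65 white point:

```python
def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ to CIELAB (white point defaults to D65)."""
    def f(t):
        # cube root with the standard linear segment near black
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = (f(c / w) for c, w in zip((x, y, z), white))
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green-red axis
    b = 200 * (fy - fz)        # blue-yellow axis
    return L, a, b

print(xyz_to_lab(95.047, 100.0, 108.883))  # ~ (100.0, 0.0, 0.0) for the white point
```

Normalizing by the white point is what makes L\*, a\* and b\* background-comparable: a neutral colour maps to a\* = b\* = 0 regardless of its absolute luminance.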


Next, a difference between colour difference of lightest RGBs and average RGBp colour on the sphere can be noted. Values vary less than 10% for Cycles and Yafaray, but there is a great dif‐ ference between lightest and average colour for Blender Render, obviously there is no differ‐ ence between average RGBp input and rendered colour on both backgrounds. Also, notable difference is present when comparing lightest colour RGBs rendered with Cycles, confirming that Cycles does somehow takes background into account when rendering light colours.

| **Colour difference ΔE00** | **Rendering engine** | Input RGBi vs. rendered RGBi on RGB20 background | Input RGBi vs. rendered RGBi on RGB80 background | Adapted RGBa vs. rendered adapted RGBa on RGB80 background |
|---|---|---|---|---|
| RGBs | Blender Render | 3.66 | 3.66 | 28.65 |
| | Cycles | 10.34 | 10.85 | 28.98 |
| | Yafaray | 9.11 | 9.11 | 34.62 |
| RGBp | Blender Render | 0 | 0 | 33.11 |
| | Cycles | 10.63 | 10.63 | 32.83 |
| | Yafaray | 9.78 | 9.87 | 37.86 |

**Table 3.** Average colour difference ΔE00 between input and rendered colours for lightest RGBs and average colours RGBp.
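The chapter reports ΔE00 values computed with the CIEDE2000 formula, whose full lightness, chroma and hue weighting functions are lengthy. As a simpler illustration of how such input-versus-rendered comparisons are made, the sketch below uses the classic Euclidean ΔE\*ab (CIE76) distance between two CIELAB colours; the example values are hypothetical and are not meant to reproduce **Table 3**.

```python
# CIE76 colour difference: Euclidean distance in CIELAB space.
# Note: Table 3 uses the more elaborate CIEDE2000 (dE00), which adds
# lightness/chroma/hue weighting on top of this basic idea.
def delta_e76(lab1, lab2):
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# Hypothetical example: an input colour vs. its rendered counterpart.
input_lab = (52.0, 10.0, -6.0)
rendered_lab = (49.0, 14.0, -6.0)
print(delta_e76(input_lab, rendered_lab))  # 5.0
```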

In contrast, the colour difference between the adapted and the rendered adapted colour is high. The colour difference is higher for the average colour than for the lightest colour, which is consistent with the non-adapted colours. Values here vary by more than 10%, most for Blender Render, which is quite the opposite of the non-adapted colours. Presumably, the adapted colour is treated differently than the non-adapted colour by the rendering engines.

In **Figures 6** and **7**, a\* and b\* co-ordinates for each sample for Blender Render are shown. In **Figure 6**, where the a\* and b\* co-ordinates of the lightest RGBs colour are presented, it can be observed that the colour space roughly matches the sRGB gamut. All renderings took place in the sRGB colour space. For adapted colours, there is a condensation of samples along lines running from the centre.

**Figure 6.** a\* and b\* co‐ordinates of lightest RGBs colour for each sample for Blender Render.

**Figure 7.** a\* and b\* co‐ordinates of average RGBp colour for each sample for Blender Render.

In **Figure 7**, where the a\* and b\* co-ordinates of the average RGBp colour are presented, it can be observed that the gamut is smaller than for the lightest colours. Again, condensation is visible for adapted colours.

In **Figure 8**, the lightness L\* of the lightest RGBs colour for each sample rendered with Cycles is shown. In comparison with Blender Render, samples are grouped in parts where lightness is higher, so the arrangement of samples is different. Lightness does not grow constantly as in the case of Blender Render; there is a jump when RGB values go over 125. Here, samples are scattered in lighter regions, whereas for Blender Render there is scattering in darker regions. The lightest colours have higher lightness L\* in comparison to Blender Render, and CIECAM02 makes colours lighter to achieve constant colour appearance.

In **Figure 9**, the lightness L\* of the average RGBp colour for each sample for Cycles is presented. The lightness chart is similar to that of Blender Render; the jump in lightness for colours with RGB over 125 is still visible, but not to such an extent. The effect of adaptation can still be seen.

Some outstanding phenomena can be observed in **Figure 10**, where the a\* and b\* co-ordinates of the lightest RGBs colour for each sample for Cycles are presented. Adapted colours are gathered in a few groups along lines emerging from the centre. For these colours, the largest colour difference in **Table 3** was obtained. In **Figure 11**, where the a\* and b\* co-ordinates of the average RGBp colour for each sample for Cycles are shown, this anomaly cannot be observed to such an extent. Based on this observation and the colour differences, it can be concluded that lighter colours are treated differently by Cycles than darker colours. The gamut of both charts still resembles that of the sRGB colour space.

**Figure 8.** Lightness L\* of lightest RGBs colour for each sample for Cycles.

**Figure 9.** Lightness L\* of average RGBp colour for each sample for Cycles.

**Figure 10.** a\* and b\* co‐ordinates of lightest RGBs colour for each sample for Cycles.

**Figure 11.** a\* and b\* co‐ordinates of average RGBp colour for each sample for Cycles.


The results for Yafaray are presented in the following figures. In **Figure 12**, the lightness L\* of the lightest RGBs colour for each sample for Yafaray is presented. Compared to Blender Render, there is again more scattering in lighter regions, the same as for Cycles. In addition, the jump in lightness for colours with RGB higher than 125 is visible too. The adaptation effect is also visible, the same as in previous instances.

**Figure 12.** Lightness L\* of lightest RGBs colour for each sample for Yafaray.

The lightness L\* of the average RGBp colour obtained for each sample for Yafaray is similar to that of Cycles (**Figure 13**). Again, adaptation affects the lightness of colours, but not as notably as for the lightest colours.


**Figure 13.** Lightness L\* of average RGBp colour for each sample for Yafaray.

In **Figure 14**, the a\* and b\* co-ordinates of the lightest RGBs colour for each sample for Yafaray are presented. Again, the gamut matches the sRGB colour space. In comparison to Blender Render, samples are condensed into groups for input colours too, while adapted colours behave similarly to Cycles, even though this is not reflected in the colour differences. In **Figure 15**, the a\* and b\* co-ordinates of the average RGBp colour for each sample for Yafaray are shown. Again, there is a difference between input and adapted colours, but it is not as pronounced as in the previous case.

**Figure 14.** a\* and b\* co‐ordinates of lightest RGBs colour for each sample for Yafaray.

**Figure 15.** a\* and b\* co‐ordinates of average RGBp colour for each sample for Yafaray.

By comparing CIELAB values, it can be concluded that the rendering engines do not treat all colours equally. Even though Cycles and Yafaray do not apply the same shading algorithms, the results are surprisingly similar. It was ascertained that colour differences are largest for Cycles, and the same was confirmed by the analysis of CIELAB values. Moreover, there is a notable influence of the background on the rendered colour for Cycles.

#### **3.4. Relationship between L\*s and L\*p**


In **Figures 16**–**18**, the relationship between the lightness of the lightest colour L\*s in the image (on the *x*-axis) and the average colour L\*p in the image (on the *y*-axis) is presented in sets for the input colour on RGB20, the input colour on RGB80 and the adapted colour on RGB80. In **Figure 16**, this relationship is presented for Blender Render. It can be seen that the relationship is quite linear and roughly follows the function *y* = 0.7*x*. Lightness L\*p is lower than L\*s, which was expected, since shading (and therefore darkening) is applied to the object. The relationship is uniform for lighter colours, while grouping can be observed for darker colours. It can also be noted that when adaptation is carried out, colours shift towards lighter colours. This shift is most visible in the range of darker colours with L\*s = [0–10]. The same phenomenon was observed previously when analysing CIELAB values.
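A trend such as *y* = 0.7*x* can be checked with a least-squares fit through the origin on the (L\*s, L\*p) pairs read from the chart. A small self-contained sketch with synthetic pairs (illustrative numbers, not the chapter's data):

```python
# Least-squares slope of y = k*x through the origin: k = sum(x*y) / sum(x*x).
def fit_slope(pairs):
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    return sxy / sxx

# Synthetic (L*s, L*p) pairs roughly following L*p = 0.7 * L*s,
# standing in for values read off a Figure-16-style chart.
pairs = [(10, 7.2), (30, 20.5), (50, 35.6), (70, 48.3), (90, 63.4)]
k = fit_slope(pairs)
assert 0.69 < k < 0.72  # recovers the ~0.7 slope
```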

In **Figure 17**, this relationship is presented for Cycles. It can be observed that the relationship is not linear; colours are scattered and their position depends on the colour. Again, darker colours are grouped, while lighter colours scatter in groups depending on lightness, most notably in the range L\*s = [90–100]. Interestingly, despite the continuously growing scattering, the range L\*s = [80–90] is quite uniform. For adapted colours, a colour shift is again visible, but in this case it depends on lightness. The distance of the shift depends on lightness, which is visible in the range L\*s = [30–40]. In this range, lighter colours are shifted more than darker ones, which is contrary to the previous conclusions.

**Figure 16.** Relationship between lightness of lightest colour L\*s in the image and average colour L\*p in the image for Blender Render.


**Figure 17.** Relationship between lightness of lightest colour L\*s in the image and average colour L\*p in the image for Cycles.

**Figure 18.** Relationship between lightness of lightest colour L\*s in the image and average colour L\*p in the image for Yafaray.


In **Figure 18**, this relationship is presented for Yafaray, and it can be observed that the results are similar to those for Cycles and are in accordance with the analysis of CIELAB values. The only difference is visible for the lightest colours, which seem less scattered than for Cycles. Again, adapted colours shift towards lighter colours, with some non-linear shifts similar to Cycles.

After analysing the relationship between the lightness of the lightest colour L\*s in the image and the average colour L\*p in the image for all rendering engines, it can be concluded that Blender Render shades colours linearly, while the shading is more complex in the case of Cycles and Yafaray, where it depends on individual colours or colour groups. For those two rendering engines, it was noted that the colours separate into groups. Shading of darker colours inclines towards linear, while the shading pattern of lighter colours cannot be easily defined. A notable difference was perceived between input colours on the lighter and the darker background. The relationship was the same in the case of Blender Render, meaning that it does not consider the background. In contrast, the relationship changed for Cycles and Yafaray, meaning that those two rendering engines do consider the background and/or surroundings when rendering the image. This phenomenon was not observed for Yafaray when analysing CIELAB values.

#### **3.5. Relationship between C\*s and C\*p**


In **Figures 19**–**21**, the relationship between the chroma of the lightest colour C\*s in the image (on the *x*-axis) and the average colour C\*p in the image (on the *y*-axis) is presented in sets for the input colour on RGB20, the input colour on RGB80 and the adapted colour on RGB80. In **Figure 19**, this relationship is presented for Blender Render, and it can be observed that, similarly to the previous section, it is quite linear, with some deviation in regions with lower chroma. The chart is even more uniform in terms of chroma after adaptation.

In **Figure 20**, the chroma relationship is presented for Cycles, and it can be observed that the relationship is still quite linear for input colours, but more spread, meaning that Cycles does not treat colours based on chroma. After adaptation, the chroma of darker colours decreases and the chroma of lighter colours increases, again depending on the original colour.

In **Figure 21**, this relationship is presented for Yafaray, and again similarities with Cycles can be observed. Like Cycles, the relationship remains linear for input colours, but colours are spread along the *y*-axis. In contrast with Cycles, the region with lower chroma C\*s < 10 is also covered, and after adaptation, chroma in darker regions increases.
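Chroma C\* in these charts is derived from the a\* and b\* co-ordinates of each CIELAB colour. A minimal sketch with hypothetical values:

```python
# CIELAB chroma: radial distance from the neutral axis in the a*-b* plane.
def chroma(lab):
    L, a, b = lab
    return (a * a + b * b) ** 0.5

# Hypothetical lightest vs. average colour of one rendered sample.
lightest = (62.0, 30.0, 40.0)
average = (45.0, 21.0, 28.0)
print(chroma(lightest), chroma(average))  # 50.0 35.0
```

A linear C\*s–C\*p relationship, as observed for Blender Render, then corresponds to shading that scales the a\* and b\* co-ordinates by a roughly constant factor.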

**Figure 19.** Relationship between chroma of lightest colour C\*s in the image and average colour C\*p in the image for Blender Render.

**Figure 20.** Relationship between chroma of lightest colour C\*s in the image and average colour C\*p in the image for Cycles.


**Figure 21.** Relationship between chroma of lightest colour C\*s in the image and average colour C\*p in the image for Yafaray.

Again, Blender Render shades colour uniformly based on chroma, unlike Cycles and Yafaray, where shading is complex and depends on each colour. In comparison with the lightness relationship, colours were not divided into groups but spread more equally. Similarly, the difference noted between input colours on the lighter and the darker background was again visible for Cycles and Yafaray, but not as much as for lightness. It can be concluded that chroma affects shading too. The chroma of adapted colours is shifted, but not in the same directions as lightness and not in the same way for all rendering engines.

#### **3.6. Relationship between Rs and L\*p, Gs and L\*p and Bs and L\*p**

Finally, the relationship between each RGB component of the lightest colour (Rs, Gs and Bs on the *x*-axis) and the lightness of the average colour L\*p (on the *y*-axis) was graphically presented. A *z*-axis was introduced to avoid overlapping. Charts are presented in sets for the input colour on RGB20, the input colour on RGB80 and the adapted colour on RGB80.

In **Figure 22**, this relationship is presented for Blender Render. It can be observed that the charts are the same for the input colour on the RGB20 and RGB80 background. The effect of adaptation is visible in terms of lightness and RGB values, and there are some anomalies in darker regions of the R and B components, which was observed in preliminary research. It can also be observed that CIECAM02 affects the lightest colour only minimally.

**Figure 22.** Relationship between Rs and L\*p (red), Gs and L\*p (green), Bs and L\*p (blue) in the image for Blender Render.

In **Figure 23**, this relationship is presented for Cycles. Compared to Blender Render, colours are grouped differently and cover a wider range of lightness. There is a visible difference for the input colour on the RGB20 and RGB80 background, the same as in the previous analysis. Again, there is a notable difference between input and adapted colours, and there are larger changes for darker colours when adapted.

In **Figure 24**, this relationship is presented for Yafaray, and the results are again similar to those for Cycles; the difference for the input colour on the RGB20 and RGB80 background is only minimal. The effects of adaptation are again similar to those for Cycles.

From the results, it can be concluded that Blender Render shades colours linearly, while the shading for Cycles and Yafaray is more complex. Both of those rendering engines consider the background when rendering colour.
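The "linear shading" conclusion can be illustrated with a toy model: if an engine simply scales the linear light intensity of every colour by a constant shading factor k (a hypothetical Lambertian-style assumption, not the engines' actual algorithms), then for a neutral ramp the resulting L\*p relates almost linearly to L\*s, with a slope close to k^(1/3):

```python
# Toy model: uniform intensity scaling in linear light, for a grey ramp.
def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness(Y):
    # CIE lightness L* from relative luminance Y (with Yn = 1).
    f = Y ** (1 / 3) if Y > (6 / 29) ** 3 else Y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

k = 0.45  # hypothetical average shading factor over the sphere
pairs = [(lightness(srgb_to_linear(v)), lightness(k * srgb_to_linear(v)))
         for v in range(15, 256, 30)]

# Shaded ("average") lightness always stays below the unshaded
# ("lightest") one, and for mid-to-light greys the ratio approaches
# k**(1/3) ~ 0.77 -- a roughly linear L*s-L*p relationship.
assert all(lp < ls for ls, lp in pairs)
```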

**Figure 23.** Relationship between Rs and L\*p (red), Gs and L\*p (green) and Bs and L\*p (blue) in the image for Cycles.

**Figure 24.** Relationship between Rs and L\*p (red), Gs and L\*p (green) and Bs and L\*p (blue) in the image for Yafaray.

## **4. Conclusion**

Again, Blender Render shades colour uniformly based on chrome unlike Cycles and Yafaray where shading is complex and depending on each colours. In comparison with lightness rela‐ tionship, colours were not divided in groups but spread more equally. Similarly, difference noted between input colours on lighter and darker background was again visible for Cycles and Yafaray, but not as much as lightness. It can be concluded that chroma affects shading too. Chroma of adapted colours is shifted but not in same directions as lightness and not in

**Figure 21.** Relationship between chroma of lightest colour C\*s and the image and average colour C\*p in the image for

Finally, relationship between each RGB component of lightest colour (Rs, Gs and Bs on *x*‐axis) and lightness of average colour L\*p (on *y*‐axis) was graphically presented. *z*‐axis was intro‐ duced to avoid overlapping. Charts are presented in sets for input colour on RGB20, input

In **Figure 22**, this relationship is presented for Blender Render. It can be observed that charts are the same for input colour on RGB20 and RGB80 background. The effect of adaptation is visible in terms of lightness and RGB values and that there are some anomalies in darker regions of R and B component which was observed in preliminary research. It can also be

**Figure 22.** Relationship between Rs and L\*p (red), Gs and L\*p (green), Bs and L\*p (blue) in the image for Blender Render.

same way for all rendering engines.

Yafaray.

38 Computer Simulation

colour on RGB80 and adapted colour on RGB80.

**3.6. Relationship between Rs and L\*p, Gs and L\*p and Bs and L\*p**

observed that CIECAM02 effects lightest colour only minimally.

The process of producing a 2D image from a 3D mathematically described space is very complex, since it depends on many factors that cannot be completely influenced by the user.

One of these factors is certainly the rendering of colour, and not only in the field of 3D technologies but also in other fields of computer graphics. Despite the progress and the availability of information and knowledge in this field, there is still no clear answer about colour perception, since this phenomenon depends, besides physical factors, on other factors as well. With the emergence of new complex algorithms for rendering and visualization, the problem of colour reproduction has increased, and despite the established methods, there are still no universal solutions to ensure constant colour appearance. Although colour appearance models, more specifically CIECAM02, have been in use for many years, they have not yet been analysed and explored to ensure constant colour appearance in the field of computer graphics. Demonstration of a successful implementation of the CIECAM02 model was possible only empirically, so it was necessary to determine how algorithms interpret colour during simulation and reproduction, and also how rendering engines interpret colour on a 2D rendered image after setting up the 3D scene.
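At the core of CIECAM02 lies a von Kries-style chromatic adaptation in the CAT02 cone space. The following minimal pure-Python sketch shows that single step; the full CIECAM02 model additionally applies a degree-of-adaptation factor and computes luminance adaptation and appearance correlates, and the function names here are illustrative.

```python
# CAT02 matrix: maps CIE XYZ to sharpened cone-like RGB responses.
M = ((0.7328, 0.4296, -0.1624),
     (-0.7036, 1.6975, 0.0061),
     (0.0030, 0.0136, 0.9834))

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def inverse3(m):
    # Inverse of a 3x3 matrix via the adjugate over the determinant.
    def cof(i, j):
        r = [k for k in range(3) if k != i]
        c = [k for k in range(3) if k != j]
        minor = m[r[0]][c[0]] * m[r[1]][c[1]] - m[r[0]][c[1]] * m[r[1]][c[0]]
        return (-1) ** (i + j) * minor
    det = sum(m[0][j] * cof(0, j) for j in range(3))
    return tuple(tuple(cof(j, i) / det for j in range(3)) for i in range(3))

M_INV = inverse3(M)

def adapt(xyz, white_src, white_dst):
    # Von Kries adaptation: scale the cone responses by the ratio of
    # destination and source white points, then return to XYZ.
    rgb = mat_vec(M, xyz)
    ws, wd = mat_vec(M, white_src), mat_vec(M, white_dst)
    return mat_vec(M_INV, tuple(r * d / s for r, s, d in zip(rgb, ws, wd)))

D65 = (0.9505, 1.0000, 1.0890)
ILL_A = (1.0985, 1.0000, 0.3558)  # CIE illuminant A white point
```

As a sanity check, adapting the source white point itself yields the destination white point exactly, which is the defining property of a von Kries transform.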

**References**

House Publisher; 2014. 240 p.

JOSA.57.001105

Optics. 1965;**4**(7):757–776. DOI: 10.1364/AO.4.000767

Huntsville; 1980. pp. 154–160. DOI: 10.1117/12.959611

2009. 2009;**28**(3):1–10. DOI: 10.1145/1531326.1531336

ACM; 1994. pp. 239–246. DOI: 10.1145/192161.192213

2000;**5**(2):25–32. DOI: 10.1080/10867651.2000.10487522

1975;**18**(6):311–317. DOI: 10.1145/360825.360839

2nd ed. Burlington: Morgan Kaufman; 2010. 1200 p.

Efforts That Shaped The Field; New York, NY, USA: ACM; 1998.

Techniques 96; June 7–19 1996; Porto, Portugal: Springer; pp. 21–30.

pp. 192–198. DOI: 10.1145/965141.563893

[1] McCluney R. Introduction to Radiometry and Photometry. 2nd ed. Norwood: Artech

Rendering Techniques in 3D Computer Graphics Based on Changes in the Brightness of the Object Background

http://dx.doi.org/10.5772/67737

41


Despite the extensive information and knowledge in this field, there is still no clear answer about colour perception, since this phenomenon depends not only on physical factors but also on several others. With the emergence of new, complex algorithms for rendering and visualisation, the problem of colour reproduction has grown and, despite established methods, there are still no universal solutions that ensure constant colour appearance. Although colour appearance models, more specifically CIECAM02, have been in use for many years, their ability to ensure constant colour appearance has not yet been analysed and explored in the field of computer graphics. A successful implementation of the CIECAM02 model could only be demonstrated empirically, so it was necessary to determine how algorithms interpret colour during simulation and reproduction, and how rendering engines interpret colour on a 2D rendered image after the 3D scene is set up.

For that purpose, in our research we studied the rendering of colours with three rendering engines (Blender Render, Cycles and Yafaray) of the open-source software Blender, based on changes in the lightness of the object background from 20 to 80%. In one of the cases, the colour of the object was adapted to a lighter background using the colour appearance model CIECAM02.

By analysing the colour differences, lightness and chroma between colours rendered with the different rendering engines, we found that the engines interpret colour differently, even though the RGB values of the colours and the scene parameters were the same. The differences were particularly evident when the rendering engine Cycles was used. The results showed that Blender Render treats colour more linearly than the two more advanced engines, Cycles and Yafaray, at least in the case of lightness and chroma. With Cycles and Yafaray, colours and the changes in their properties behave non-linearly, while shading varies depending on the colour and its properties. In addition, we found that Cycles in particular, because it uses indirect lighting in the colour renderings, also takes the object background into account, since the results differed when the background was changed without any adjustment to the colour.

The implementation of the CIECAM02 model in the colour rendering workflow of the tested rendering engines was successful, since in all three cases its use generally resulted in a better visual match (smaller colour differences) between pairs of stimuli under a controlled change of the defined parameters. We also found that how well colour appearance is maintained depends on the operating principle of the rendering engine and on the lightness and chroma of the rendered colour.

Further research towards an in-depth understanding of rendering engines and the influence of shading on colour could include a larger number of object and background colours, as well as visual assessment with a larger number of observers.

## **Author details**

Nika Bratuž, Helena Gabrijelčič Tomc\* and Dejana Javoršek

\*Address all correspondence to: helena.gabrijelcic@ntf.uni‐lj.si

Department of Textiles, Graphic Arts and Design, Faculty of Natural Sciences and Engineering, Chair of Information and Graphic Arts Technology, University of Ljubljana, Ljubljana, Slovenia





**Chapter 3**

## **Modelling and Visualisation of the Optical Properties of Cloth**


Tanja Nuša Kočevar and Helena Gabrijelčič Tomc

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67736

#### **Abstract**

Cloth and garment visualisations are widely used in fashion and interior design, in entertainment and in the automotive and nautical industries, and are indispensable elements of visual communication. Modern appearance models attempt to offer a complete solution for the visualisation of complex cloth properties. In the review part of the chapter, advanced methods that enable visualisation at micron resolution, methods used in the three-dimensional (3D) visualisation workflow and methods used for research purposes are presented. Within the review, methods offering a comprehensive approach, as well as experiments on specific cloth attributes that present particular optical phenomena, are analysed. The review of appearance models includes surface-based and image-based models, volumetric models and explicit models. Each group is presented with representative research groups and with the applications and limitations of the methods. In the final part of the chapter, the visualisation of cloth specularity and porosity with an uneven surface is studied. The study and visualisation were performed using image data obtained with photography. Acquiring structure information on a large scale enables the recording of structure irregularities that are very common on historical textiles and laces, as well as on artistic and experimental pieces of cloth. The contribution ends with the presentation of cloth visualised with the use of specular and alpha maps, the result of the image processing workflow.

**Keywords:** 3D visualisation, cloth appearance model, porosity, specularity, image processing

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

The development of three-dimensional (3D) modelling of cloth appearance is of great interest for various areas of activity that require precise and realistic visualisation of cloth surfaces, especially their optical characteristics. For some purposes, such as fashion, the automotive and film industries, architecture, and the preservation and visualisation of cultural heritage, detailed visualisations of fabrics are needed. Certain characteristics of textile texture are particularly challenging to visualise, such as the porosity of the structure, which can be very complex (for example, in a lace structure), the translucency of fibres and the unevenness of the cloth surface (e.g. in the case of worn materials). When visualising a textile material, both the purpose of the visualisation and the viewing distance have to be considered.

In the review part of the chapter, we present an overview of the methods used for cloth appearance, whereas in the final part of the manuscript we focus on the visualisation of cloths with uneven surfaces, which is often the case in the visualisation of cultural heritage. We also present the results of our research, which includes an image processing method to generate specular and alpha maps for the visualisation of a cloth surface. A cloth surface with a considerably uneven texture cannot be visualised with methods that generate even or patterned relief, as unevenness can be the result of fabric damage or use. Therefore, data should be obtained using a 3D scanning method or with photo documentation. For this purpose, it is extremely important to record information on a very large sample surface to obtain the distribution of structure irregularities. On the other hand, taking into account the wide range of possible applications, the model should not have an excessively detailed topology.

## **2. Visualisation of the optical and constructional parameters of cloth**

In computer graphics, cloth modelling includes cloth geometry, cloth deformation and simulation, and cloth appearance models. The basic idea of the last is to compute, from real-world data at the fibre, yarn and fabric levels, the data needed for a realistic appearance of the virtual representation [1, 2]. In real-world environments, the appearance of cloth depends on three main conditions: the light source, the optical-reflective properties of the material and surface, and the observer's visual perception [3]. These three conditions are also considered in the mathematical computations for the graphic visualisation of cloth. The optical properties of cloth objects (at the level of the final cloth and at the micro level, i.e. yarns and fibres) are defined by their chemical and physical properties and by the yarn/fabric structure. In the virtual world, the optical properties of objects are represented as mathematical abstractions, with three types of data being the most important: geometrical data (object topology), image data (textures and maps) and data on electromagnetic waves and light phenomena (reflectance models and rendering algorithms).

Due to the multi-layered structure of textiles, the final calculation of the optical phenomena on their surface is very complex: at all three levels (fibre, yarn and fabric), the action of electromagnetic waves and light can be described with the equation: *incident light = portion of reflected light + portion of absorbed light + portion of scattered light + portion of transmitted light*.

For the final appearance, the particular optical phenomena of the entire structural hierarchy, at the constructional and compositional levels, should be taken into consideration to enable an accurate generation and evaluation of surface effects [4].

Cloth properties that contribute to visual appearance and are included in appearance modelling are as follows:

**1.** optical properties (reflection, scattering, transmission and absorption)

**2.** porosity

**3.** colour (optical properties of fibres and yarns and constructional parameters)

**4.** texture and relief (type of weave, fibre and yarn construction parameters and finishing)

**5.** specific properties (anisotropy, yarns and fibres with special effects and higher translucency).


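The light-balance equation above can be illustrated with a minimal numeric sketch. The four fractions below are hypothetical values chosen purely for illustration, not measured cloth data; the check simply verifies that the portions partition the incident light:

```python
# Partition of incident light at a cloth surface (illustrative values only):
# incident = reflected + absorbed + scattered + transmitted
incident = 1.0  # normalised incident flux

# Hypothetical fractions for a loosely woven sample (NOT measured data)
reflected, absorbed, scattered, transmitted = 0.35, 0.30, 0.20, 0.15

total = reflected + absorbed + scattered + transmitted
assert abs(total - incident) < 1e-9  # energy conservation: fractions must sum to 1

# An open (porous) structure lets more light pass straight through,
# so the transmitted portion can serve as a crude porosity proxy.
porosity_proxy = transmitted
print(f"reflected={reflected:.2f}, transmitted={transmitted:.2f}, total={total:.2f}")
```

For a real material these fractions additionally depend on wavelength and on the fibre, yarn and fabric levels, which is what the appearance models reviewed below attempt to capture.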


## **3. Advanced appearance models**

Recently, Schröder et al. [5] and Khungurun et al. [6] have reviewed appearance models and categorised them into three main types of approaches: *surface- and image-based models*, *volumetric and fibre-based models* and *explicit modelling approaches*. It should not be overlooked that some methods combine the fundamentals of different types of appearance models. The cues that Schröder et al. [5] systematically defined and analysed in their study were translucency, silhouette, light diffusion, the possibility of real-time rendering, scalability, integration scale and viewing distance.

### **3.1. Surface-based and image-based models**

In surface-based models, different reflectance and texture functions are implemented: *bidirectional reflectance distribution functions* (*BRDF*), *bidirectional scattering distribution functions* (*BSDF*), *bidirectional texture functions* (*BTF*), *bidirectional curve scattering distribution functions* (*BCSDF*) and *bidirectional fibre scattering distribution functions* (*BFSDF*). Here, the fabric is represented as a two-dimensional surface (mesh, curve), which entails certain limitations, such as incorrect presentation of the 3D silhouette at the micro and macro levels and of fabric edges. Moreover, when simple shape-based approaches that include texture images and relief maps are implemented, optical phenomena at the fibre level are not visualised with sufficient correctness and anisotropic shading lacks accuracy [5, 6]. In general, surface-based models are scalable and used for far and medium viewing distances. They can reproduce translucency and can be implemented in real-time solutions; moreover, their integration scale is at the level of the composition [5].

The principle of *BRDF* and, in more general terms, *BSDF* models [7, 8] is the simplification of reflectance and scattering, presented as a function of four real variables that defines how light is reflected (scattered) at a surface depending on the optical properties of the material. The function, shown schematically in **Figure 1**, describes the flow of *radiance* emitted from the 3D object in the direction of the observer, depending on the direction, the angle of the incident radiance and the position. In the function *BRDF<sub>λ</sub>*(*θ<sub>i</sub>*, *θ<sub>r</sub>*, *ϕ<sub>i</sub>*, *ϕ<sub>r</sub>*, *u*, *v*) and in **Figure 1**, *L<sub>r</sub>* is the radiance, i.e. the radiance reflected from the material into a solid angle from the projected surface; *E<sub>i</sub>* is the irradiance, i.e. the intensity of the incident light on the surface of the material; the angles *θ<sub>i</sub>* and *θ<sub>r</sub>* are the zenith angles between the irradiance and radiance directions and the surface normal (*z*); the angles *ϕ<sub>i</sub>* and *ϕ<sub>r</sub>* are the azimuth angles of the irradiance and radiance directions; and *u* and *v* are position parameters.
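To make the four-angle dependence concrete, the sketch below evaluates a toy BRDF built from a Lambertian diffuse term plus a normalised Phong-style specular lobe. The albedo, specular weight and exponent are assumed illustration values, not parameters from any of the models cited above:

```python
import math

def brdf(theta_i, phi_i, theta_r, phi_r, albedo=0.6, ks=0.3, n=20):
    """Toy BRDF: Lambertian diffuse term + Phong-style specular lobe.

    Angles in radians: (theta_i, phi_i) incident, (theta_r, phi_r) reflected,
    zenith angles measured from the surface normal z. Illustrative only.
    """
    diffuse = albedo / math.pi  # ideal diffuse reflection is direction-independent
    # The mirror direction of the incident ray has zenith theta_i, azimuth phi_i + pi;
    # cos_alpha is the cosine of the angle between it and the reflected direction.
    cos_alpha = (math.sin(theta_i) * math.sin(theta_r)
                 * math.cos(phi_r - (phi_i + math.pi))
                 + math.cos(theta_i) * math.cos(theta_r))
    specular = ks * (n + 2) / (2 * math.pi) * max(cos_alpha, 0.0) ** n
    return diffuse + specular

# Reflected radiance L_r = BRDF * E_i, with irradiance foreshortened by cos(theta_i)
theta_i = math.radians(30)
E_i = 100.0 * math.cos(theta_i)
L_mirror = brdf(theta_i, 0.0, theta_i, math.pi) * E_i            # along the mirror direction
L_off = brdf(theta_i, 0.0, math.radians(70), math.pi) * E_i      # well off the specular peak
print(f"L_mirror={L_mirror:.1f}, L_off={L_off:.1f}")
```

A full *BRDF<sub>λ</sub>* additionally varies with wavelength λ and with the surface position (*u*, *v*), which this direction-only sketch omits.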

**Figure 1.** Schematic presentation of BRDF [9].

Modelling and Visualisation of the Optical Properties of Cloth

http://dx.doi.org/10.5772/67736


In their researches, Ashikmin [10], Irawan and Marschner [11] and Sadeghi [12, 13] used surface representations of fabric geometry that included functions such as *BRDF* and *BSDF* and the collection of various texture data.

Ashikmin [10] presented a *microfacet BRDF* model that addresses the modelling of the shape of highlights while maintaining reciprocity and energy conservation. The function was tested on various materials, including satin and velvet fabrics.

Adabala et al. [14] used the *weave information file* (*WIF*) format to obtain the weave pattern. This format contains the threading information, i.e. the definition of the threading of the warp threads, and a lift plan (the weft pattern). The colour scheme of the weave pattern is defined by the pattern mix together with the colour combination and the colour information of the warp and weft threads. In their research, three weave patterns (one grey-scale map and two colour maps) were used. The focus was both on cloth modelling for distant and close-up viewing of the material and on the development of reflectance models that cover both viewing conditions. The *Cook-Torrance microfacet BRDF* model was employed. Transmission of light through gaps and colour bleeding through fibres were also considered and calculated.
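The threading/lift-plan structure described above determines the interlacement directly: a warp end is up on a given pick exactly when its shaft is lifted. A minimal sketch with invented data (not the actual WIF fields used by Adabala et al.):

```python
def interlacement(threading, liftplan):
    """Build the weave interlacement grid from WIF-style data: `threading`
    maps each warp end to a shaft, and each `liftplan` entry is the set of
    shafts lifted for one weft pick. Cell (pick, end) is 1 where the warp
    thread passes over the weft."""
    grid = []
    for lifted in liftplan:
        grid.append([1 if shaft in lifted else 0 for shaft in threading])
    return grid

# Plain weave on two shafts: alternate threading, alternate lifting.
threading = [1, 2, 1, 2]          # shaft assignment per warp end
liftplan = [{1}, {2}, {1}, {2}]   # shafts lifted per weft pick
grid = interlacement(threading, liftplan)
```

The resulting 0/1 grid is exactly the kind of pattern map that can drive both the geometry and the colour scheme of a rendered weave.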

Irawan [15] presented the goniometric measuring method of the *anisotropic BRDF* for four textile fibres and three weave patterns. Besides, he proposed a reflectance model defined on the basis of the specular scattering from the fibres composing the yarns (and consequently the weave pattern). For the representation of the geometry, physically based models and data-driven *BTF* models are analysed and compared in his doctoral research. Here, the model for the calculation of specular highlights in the texture and the BRDF of a polyester lining cloth are presented. The results of the thesis comprise a collection of fabric visualisations whose photorealism, perhaps also due to some theoretical assumptions, can be discussed in comparison with the results of modern approaches.


The above-mentioned research continued with the publication of Irawan and Marschner [11], which presented an analysis of the specular and diffuse reflection from woven cloth that mainly influences the optical properties of the fabric. In their research, a procedural scattering model for diffuse light reflection, which calculates the reflection in dependence of the texture, is proposed. The research was performed on a variety of fabric samples, including natural and synthetic fibres and staple and filament yarns. Different weave patterns were also included in the analysis (plain, satin and twill). The model is based on the analysis of the specular reflectance of light from fibres and simulates the finest fabric surfaces. The model is not data-driven and includes physical parameters such as the geometry of the fibres and yarns and the weave pattern. In the experimental part, the results are evaluated in comparison with a high-resolution video of the real fabric, and *BTF* calculations of the analysed fabrics were performed.

The *bidirectional texture function* (*BTF*) is an image-based representation of appearance as a function of viewing and illumination direction [16]. A BTF is a function of six variables, the six-dimensional »*reflectance field L = L(x, y, θ<sub>i</sub>, ϕ<sub>i</sub>, θ<sub>o</sub>, ϕ<sub>o</sub>), which connects, for each surface point (x, y) of a flat sample, the outgoing radiance in the direction (θ<sub>o</sub>, ϕ<sub>o</sub>) to the incoming radiance from the direction (θ<sub>i</sub>, ϕ<sub>i</sub>), respectively*« [17]. Dana et al. first introduced this function in 1999, when a new BTF-based workflow for the CG representation of more than 60 different samples was defined. In their report, the samples were observed with over 200 different viewing/illumination combinations. With the introduction of the BTF, the authors proposed a new surface appearance taxonomy, presented in **Figure 2**, where the difference in surface appearance between fixed and varied viewing and illumination directions can be observed. At fixed viewing/illumination directions, reflectance is used at coarse-scale observation and texture at fine-scale observation, and at varied viewing/illumination directions, BRDF and BTF are used.
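Operationally, a measured BTF behaves like a six-dimensional lookup table indexed by position and by the sampled view/light directions. The following sketch assumes a hypothetical in-memory layout and a nearest-sample lookup; real BTF datasets are compressed and interpolated far more carefully.

```python
import numpy as np

class BTF:
    """Hypothetical BTF container: one texture per sampled view/light pair."""
    def __init__(self, images, view_dirs, light_dirs):
        self.images = images          # (V, L, H, W) texel data
        self.view_dirs = view_dirs    # (V, 3) unit vectors of sampled views
        self.light_dirs = light_dirs  # (L, 3) unit vectors of sampled lights

    def lookup(self, x, y, view, light):
        """Nearest-sample evaluation of L(x, y, theta_i, phi_i, theta_o, phi_o):
        pick the closest measured view/light pair, then read the texel."""
        v = int(np.argmax(self.view_dirs @ view))    # closest sampled view
        l = int(np.argmax(self.light_dirs @ light))  # closest sampled light
        return self.images[v, l, y, x]

# Tiny synthetic example: 2 views, 2 lights, 4x4 texture of random reflectances.
rng = np.random.default_rng(0)
btf = BTF(rng.random((2, 2, 4, 4)),
          np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]),
          np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]))
val = btf.lookup(1, 2, np.array([0.1, 0.0, 0.99]), np.array([0.0, 0.9, 0.1]))
```

The 200+ viewing/illumination combinations of Dana et al. correspond to a denser sampling of the `view_dirs`/`light_dirs` axes of such a table.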

**Figure 2.** Surface appearance at fixed and varied viewing and illumination directions [16].

Sattler et al. [17] presented a method for determining the BTF of cloths and materials with similar reflectance behaviour, including view-dependent texture maps and using a principal component analysis of the original data. A novelty of their work was also the point light sources that enable smooth shadow boundaries on the geometry during data acquisition. Besides, the system was sensitive to the geometrical complexity and to the sampling density of the environment map, on the basis of which the illumination could be changed interactively.


In the research of Wang et al. [18], the focus was on spatial variations and anisotropy in cloth appearance. The data acquisition device is presented in **Figure 3**. For the definition of the reflection of the light at a single surface point, they used a *data-driven microfacet-based BRDF*, i.e. a six-dimensional *spatially varying bidirectional reflectance distribution function* (*SVBRDF*) *ρ(x, i, o)* [19]. In **Figure 3**, *x* is the surface point, *i* is the lighting direction, *o* is the viewing direction, *h* is the half-angle vector, *n* is the upward normal direction, *ρ*(*x, i, o*) is the BRDF at the surface point and *Ω+* is the hemisphere {*h|h · n* > 0}. Besides, a microfacet *2D normal distribution function* (*NDF*) was implemented. In their research, the SVBRDF was modelled from images of a surface acquired from a single view. In their results, they confirmed the reliability of their method, which generates anisotropic, spatially varying surface reflectance that is comparable with real measured appearance and was tested on various materials.
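A minimal sketch of a microfacet-style evaluation driven by a normal distribution function at the half vector may clarify the quantities named above. The Gaussian-like NDF and the albedo constant are placeholders, whereas Wang et al. fit measured, spatially varying NDFs per surface point.

```python
import numpy as np

def half_vector(i, o):
    """Half-angle vector h between lighting direction i and viewing direction o."""
    h = i + o
    return h / np.linalg.norm(h)

def microfacet_brdf(i, o, n, ndf, albedo=0.8):
    """Sketch of a microfacet-style BRDF rho(x, i, o): the specular response
    is driven by the NDF evaluated at the half vector. `ndf` is a callable
    D(h) and `albedo` an illustrative constant (shadowing/Fresnel omitted)."""
    h = half_vector(i, o)
    if np.dot(h, n) <= 0.0:      # h outside the upper hemisphere {h | h.n > 0}
        return 0.0
    cos_i, cos_o = np.dot(i, n), np.dot(o, n)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0
    return albedo * ndf(h) / (4.0 * cos_i * cos_o)

# Illustrative isotropic Gaussian-like NDF concentrated around the normal.
ndf = lambda h: np.exp(-(1.0 - h[2]) / 0.05)

n = np.array([0.0, 0.0, 1.0])
mirror_like = microfacet_brdf(np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 0.0, 1.0]), n, ndf)
grazing = microfacet_brdf(np.array([0.0, 0.0, 1.0]),
                          np.array([0.707, 0.0, 0.707]), n, ndf)
```

Anisotropy, as measured by Wang et al., would enter through an NDF that depends on the azimuth of `h` rather than only on `h[2]`.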

In some recent researches, the accuracy of the BTF was discussed. The review of appearance modelling methods by Schröder et al. [5] explains that the implementation of a BTF can limit the appearance results. Namely, the application of a BTF can result in incorrect modelling of shadow boundaries. Issues can also occur in the modelling of areas of high curvature, which are not represented adequately when the BTF is measured with a flat sample as a reference. Moreover, the reproduction of transparency and the correctness of silhouettes are challenging with the use of this function.

**Figure 3.** Data acquisition device and *microfacet-based SVBRDF* model [18].

The researches performed by Sadeghi and his colleagues [12, 13] present appearance models for the visualisation of microstructures. In the last part of the dissertation [12], after the presentation of the models for rendering rainbows and hair, an appearance model for rendering cloth is introduced. Here, the focus was on the measurements of the BRDF of various fabric samples and yarns, which led to the development of a new analytical BRDF model for threads. The method includes measurements of the profile of the light reflected from various types of real yarns. After the measuring, an evaluation and comparison with the reflectance behaviour of the yarns predicted with the appearance model was performed, taking into account the fabric composition and the yarn parameters [13]. The geometry of the light reflection is calculated from the cylindrical fibre, where the longitudinal angles are computed with respect to the normal plane and the azimuth angles are calculated based on the local surface normal direction.

Iwasaki et al. [20] presented research on the interactive rendering (and interactive editing of the parameters of the scattering function) of static cloth with dynamic viewpoints and lighting. The method implemented a micro cylinder model representing the weaving pattern and a patch for the calculation of light reflectance with the integration of environment lighting, visibility function, scattering function and weighting function. Their method includes the use of the gradient of the signed distance function to the visibility boundary where the binary visibility changes.

### *3.1.1. Light scattering models for fibres*


These models are geometric models of micro geometry that are combined with advanced rendering techniques (global illumination). Here, the micro geometry can be generated procedurally, and the optical properties of the fibres have to be defined with measurements. Two crucial parameters of the fibres have to be considered: anisotropic highlights and translucency (**Figure 4**) [5].

The fundamentals of the light scattering models for fibres that are used in various modern solutions were developed by Marschner et al. [21]. The calculations treat the fibres as very thin and long structures whose diameter is very small in comparison with the viewing and lighting distance. Consequently, the fibres can be approximated as curves in the scene, and a far-field approximation with curve radiance and curve irradiance is implemented, resulting in a bidirectional far-field scattering distribution function for curves, the *BCSDF* (*Bidirectional Curve Scattering Distribution Function*).
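The curve parameterisation used by such fibre scattering models reduces a direction to a longitudinal angle, measured from the fibre's normal plane, and an azimuthal angle within that plane. A small sketch, with an arbitrarily chosen frame in the normal plane (the frame construction is an illustrative choice, not the parameterisation of any specific paper):

```python
import numpy as np

def fibre_angles(d, u):
    """Longitudinal angle theta (from the normal plane of a fibre with unit
    tangent u) and azimuthal angle phi (within that plane) of direction d."""
    d = d / np.linalg.norm(d)
    u = u / np.linalg.norm(u)
    sin_theta = np.dot(d, u)                 # component along the fibre axis
    theta = np.arcsin(np.clip(sin_theta, -1.0, 1.0))
    # Build any orthonormal frame (v, w) spanning the normal plane of u.
    a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = np.cross(u, a); v /= np.linalg.norm(v)
    w = np.cross(u, v)
    phi = np.arctan2(np.dot(d, w), np.dot(d, v))
    return theta, phi

# A direction perpendicular to the fibre lies in the normal plane (theta = 0);
# one along the fibre axis has theta = pi/2.
u = np.array([0.0, 0.0, 1.0])
theta_perp, _ = fibre_angles(np.array([1.0, 0.0, 0.0]), u)
theta_par, _ = fibre_angles(np.array([0.0, 0.0, 1.0]), u)
```

In a far-field curve model, the scattering function is then expressed over these (theta, phi) pairs for the incident and outgoing directions.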

Zinke and Weber [22] upgraded the *BCSDF* to the *Bidirectional Fibre Scattering Distribution Function* (*BFSDF*), a more general approach to light scattering from filaments. In their methods, different types of scattering functions for filaments were used and parameterised for the minimum enclosing cylinder, in which the *BFSDF* calculates the transfer of radiance (**Figure 5**).

**Figure 4.** Light scattering from fibres, where R is surface reflection, TT is transmission and TRT is side reflection [5].


**Figure 5.** Scattering model from a fibre: *bidirectional scattering-surface reflectance distribution function* (*BSSRDF*) at the actual surface of the fibre—left and *bidirectional fibre scattering distribution function* (*BFSDF*) at the local minimum of the cylinder—right [5].

### **3.2. Volumetric models**

Volumetric models address certain unsolved issues of surface- and image-based models by calculating the thickness and fuzziness of the fibres and yarns. The problems of consistent silhouettes, of light diffusion in difficult-to-access object areas and of the accuracy of observation at medium distance are solved with various methods and their combinations: with the use of volumetric light transport, the examination of microscopic structures and the processing of computed tomography (CT) data. Here, the fabric elements (fibres, yarns and cloth) are treated as a volume, and the applied light scattering models are, in many applications, explicit [5]. These models enable translucency, light diffusion and optimal silhouette formation. Their results are scalable and, depending on the type of model, can be integrated in detail at yarn and fibre scale. They are not suitable for real-time solutions, and the accurate viewing distance is usually medium, although in modern solutions it is also medium to close.

In volumetric models, *light transport theory* is usually applied and the calculations for cloth include *energy transfers in anisotropic media*, i.e. *anisotropic light transport computation* occurs [23]. In isotropic media, the properties do not depend on the rotation and viewing angle, whereas in anisotropic media (fibres, hair), the optical properties depend on orientation.

Schröder et al. [5] define two types of volumetric appearance models of cloth, a *micro-flake model* and a *Gaussian mixture model* of fibres. In the *micro-flake model,* the material is represented as a collection of idealised mirror flakes. Here, the basics of anisotropic light transport are applied; volume scattering interactions and the orientation of the flakes are represented with a directional flake distribution. In the *Gaussian mixture model,* fibre locations and scattering events are calculated from the statistical distribution of intersections with the real fibre geometry. Light scattering is additionally calculated with *curve scattering models* (*BCSDF*). The mathematical calculations include computing, for a defined yarn intersecting a voxel cell, a Gaussian directionality, density and the material properties.
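The Gaussian-mixture idea can be sketched in one dimension: fibre density across a yarn cross-section as a weighted sum of Gaussians. The component weights, means and widths below are invented for illustration; in the cited work they are fitted to intersections with the real fibre geometry.

```python
import math

def gaussian_pdf(x, mean, sigma):
    """Density of a 1D normal distribution."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, components):
    """Fibre density at position x across a yarn cross-section, approximated
    by a weighted Gaussian mixture given as (weight, mean, sigma) triples."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in components)

# Two bundles of fibres centred at 0.3 and 0.7 of the cross-section width.
components = [(0.6, 0.3, 0.05), (0.4, 0.7, 0.05)]
dense = mixture_density(0.3, components)   # inside the first bundle
sparse = mixture_density(0.5, components)  # in the gap between the bundles
```

In the volumetric setting, such a fitted density per voxel is what drives the scattering probability along a ray.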

Before the year 2000, there were only a few investigations into the use of volumetric methods for the representation of anisotropic, structured media. The first implementations were performed for knitwear and for fur [24, 25], and they were the starting point of several later researches.

Xu et al. [26] used the so-called *lumislice* (**Figure 6**), a modelling primitive, i.e. a cross-section of yarn that represents the radiance at the level of the fibres (occlusion, shadows, multiple scattering). The sequence of rotated and organised *lumislices* builds the entire structure of a knit cloth.

**Figure 6.** Generation of a volumetric yarn segment. The fluff density distribution in the upper left corner is rotated along the path of the yarn to yield a yarn segment [26].

In their work, Schröder et al. [27] introduced a concept called *local visibility* and used a *Gaussian mixture model* as the approximation of the fibre distribution. This parameter is able to predict self-shadowing and calculates the correlation between defined eye rays and shadow rays. It considers voxels and the size of a yarn cross-section and is the fundament of the *bidirectional visibility distribution function* (*BVDF*). Besides, the authors presented an effective fibre density, which is calculated as the sum of the contributions of the line segments that represent a cloth and intersect a certain voxel. The voxelised cloth was finally rendered with a *Monte Carlo path tracing rendering technique*.

Zhao et al. [28] shortly reviewed the volume imaging technologies (CT, magnetic resonance and ultrasound) and discussed their limitations in the sense of their inability to acquire data that represent the direct optical appearance of the material. Besides, volume rendering and volumetric appearance models are the focus of the paper's introductory part, discussing the complexity of developing volumetric models that result in physical accuracy and the limitations of procedural methods, which do not consider and calculate the irregularities of cloth. The phenomena of fabric structural and yarn unevenness are crucial for the representation of the natural and organic appearance of cloths. The experiment introduced a method that combines the acquisition of volume models, generated from density data of X-ray computed tomography (CT) scans, with appearance data from photographs. The authors used a modified volume scattering model [29, 30] that accurately describes the anisotropy of the fibres. Because the area scanned with CT was very small, which resulted in a very detailed volume reconstruction at the resolution of a single fibre, the represented data had to be augmented and computed with the use of density and orientation fields in the volume that define the scattering model parameters. Consequently, a highly detailed volume appearance model (including highlights and detailed textures) was created with this appearance matching procedure involving the density and orientation fields extracted from the 3D data. The rendering occurs after the definition of the global optical parameters. Here, a photo taken under known but not controlled lighting was used, and the optical properties were associated with the acquired volume so that the texture of the rendered volume matched the photo's texture. The procedure defines physically accurate scattering properties in the volume of the analysed material and visually and optically accurately describes the appearance of the cloth at the fibre geometry level (small scale) and when viewed from a distance. Moreover, at small and large scale, the appearance of the fabric is natural due to the reconstruction of the irregularities at fibre, yarn and cloth level.

**3.3. Fibre-based explicit methods**

are not appropriate for real-time solutions [5].

and a three-scale transformation synthesis.

ling pipeline are presented.

ing with the setting of fibre and material parameters (**Figure 8**).

**Figure 8.** Inverse engineering pipeline for visual prototyping of cloth [35].

Fibre-based explicit models use representations with very accurate calculation of reflectance properties in intersecting point between light rays and actual fibres geometry and describe the fabric as a collection of individual discrete fibres represented with explicit geometry. These models are computationally very expensive and can be used for the most physically accurate simulations at fibre level accuracy and close viewing distances including the phenomena of translucency, optimal silhouettes and light diffusion. Usually, the results of explicit methods

Modelling and Visualisation of the Optical Properties of Cloth

http://dx.doi.org/10.5772/67736

55

Zhang et al. [34] proposed a solution for the scale-varying representations of woven fabric with the *interlaced/intertwisted displacement subdivision surface* (*IDSS*). The *IDSS* is capable to map the geometric detail on a subdivision surface and generate the fine details at fibre level. The model solves the issues of multiple-view scalability at inter- and intra-scale in woven fabric visualisation due to the implementation of interlaced, intertwisted vector displacement

Schröder et al. [35] presented a pipeline of cloth parameterization from a single image that involves a geometric yarn model and automatic estimation of yarn paths, yarn widths and weave pattern. Their *inverse engineering pipeline* includes input images and coarse optical flow field description, fine flow description of local yarn deformation, fine regularisation of image, output visualisation, the use of active yarn model that procedurally generates fibres, render-

Khungurun [6] presented a fibre-based model including a *surface-based cylindrical fibre representation* on the basis of volumetric representation. The latter is the voxel array that involves a density of the material at a certain voxel and a local direction of the fibre at this specific voxel. The procedure of fibre geometry generation includes: volume decomposition, fibre centre detection, polyline creation and smoothing and radius determination. The entire pipeline includes: (1) a description of a scene geometry and a set of input photographs of a fabric that were acquired at different lighting and viewing conditions at a defined scene; (2) a development of a light scattering model; (3) the implementation of appearance matching that uses gradient descent optimisation to find optimised parameter values of scattering model and evaluates the differences between renderings and photographs with the objective function. The rendering occurs in extended version of *Monte Carlo path tracer*. Their research ends with the comparison of micro geometry constructed from CT scans with the explicit fibre-based method and concludes the analysis that both techniques are very successful in representation of cloths at micron level. In **Figure 9**, (a) fabric geometry creation and (b) appearance model-

After the introduction of micro-CT imaging in volumetric appearance modelling of cloth, Zhao et al. [31] upgraded their research in the next years with the research that presented the cloth modelling process involving the CT technique and the expansion of its use on various fabric and weave patterns with the implementation of a *structure–aware volumetric texture synthesis method*. The process involves two phases: *exemplar creation phase* and *synthesis phase*. In the first phase, the volume data are acquired with CT scans of very small fabric samples describing the density information on a voxel grid, fibre orientation and yarns. The samples database is created with the procedure that tracks the yarns and their trajectories in the volume grid and segments the voxels so that they match the appropriate yarn and automatically detects the yarn crossing patterns. In the second synthesis phase, the input data are the collection of 2D binary data of weave pattern and 2D data describing the warp and weft yarns at yarn intersecting points. As a result, the output volume is generated that represents the fabric structure, which matches the output of the first phase (exemplars created in the first phase). Further, with the purpose to solve the complexity during the rendering of volumetric data, Zhao et al. [32] also introduced a precomputation-based rendering technique with *modular flux transfer,* where exemplar blocks are modelled as a voxel grid and precompute *voxel-tovoxel*, *patch-to-patch* and *patch-to-voxel flux transfer matrices*.

Recently, Zhao et al. [33] proposed the *automatic fitting approach* for the creation of procedural yarns including details at fibre level (**Figure 7**). CT measurements of cotton, rayon, silk and polyester yarns are taken as a basis for computation of procedural description of yarns. Optical properties of the yarns were defined with *Khungurn scattering model* [6]. Renderings occur with *Mitsuba renderer* [30]. The results were compared with photographs, which do on some areas include hairier parts. That was a success of the method, as the combination of very small samples involved in CT acquisition method and procedural approach, usually is not able to reproduce important fabric irregularities. By all means, the method is successful in solving the issue of replication of small pieces of fabric and offers the realistic non-replicating approach to the representation of details.

**Figure 7.** Presentation of the technique for automatic generation procedural representation of yarn including: (a) CT measurements, (b) fitting procedural yarns and (c) final rendering of textiles [33].


**Figure 9.** (a) A fabric geometry creation and (b) appearance modelling pipeline [6].
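The appearance-matching step that recurs in the pipelines reviewed above (fitting scattering-model parameters so that renderings match photographs) can be illustrated schematically. In the following Python sketch, `appearance_match` and `render` are illustrative names of our own: `render` stands in for a real renderer, and finite-difference gradients replace whatever gradient computation an actual system would use, so the sketch shows only the structure of the optimisation, not any author's implementation.

```python
import numpy as np

def appearance_match(photo, render, theta0, lr=0.1, steps=200, eps=1e-4):
    """Schematic appearance matching: gradient descent on scattering
    parameters `theta`, minimising the objective ||render(theta) - photo||^2.
    `render` is a placeholder for a real (e.g. Monte Carlo) renderer;
    gradients are estimated with central finite differences."""
    theta = np.asarray(theta0, dtype=float)

    def objective(t):
        return float(np.sum((render(t) - photo) ** 2))

    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (objective(theta + step) - objective(theta - step)) / (2 * eps)
        theta = theta - lr * grad
    return theta
```

With a toy linear `render`, the loop recovers parameters that reproduce the target image; with a real renderer the same structure applies, only the objective evaluations become far more expensive.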

## **4. Texture-based reconstruction of the specularity and porosity of cloth**

In the context of appearance modelling, mathematical appearance models were reviewed above, but computationally less expensive techniques should not be overlooked. These latter techniques are firmly established in the 3D animation workflow and in the production of static visualisations that include many objects in the scene, which are viewed especially at medium and far distances.

#### **4.1. Texture mapping**

In 3D technology, texture mapping has been used for the realistic visualisation of a variety of surfaces for a very long time. Using this technique, 2D details of surface properties, i.e. 2D photographs that provide colour and texture information from a given object, are added to a 3D geometrical model. To achieve photorealistic visualisation, more than one image, or map, is needed. A realistic texture must illustrate the complexity of the material surface, which can be achieved using texture mapping without demanding 3D modelling of every detail [36].

Texture mapping is a method which applies texture to an object's boundary geometry, which can be a polygonal mesh, different types of splines or various level-sets. The polygonal mesh is the most suitable for the method, since the texture simultaneously covers the polygons of an object, which can be triangles or quads. The process assigns texture coordinates to the polygon's vertices; the coordinates index a texture image and, interpolated across the polygon, determine the texture image's value at each of the polygon's pixels [37]. The 2D texture that is applied to the polygons of the 3D model can be a tiling or a non-tiling image. Methods for computing texture mapping are often called *mesh parameterization methods* and are based on different concepts of piece-wise linear mapping and differential geometry [36, 37].
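The assignment and interpolation of texture coordinates described above can be sketched for a single screen-space triangle. The following Python fragment is an illustrative sketch (the function names are ours, and perspective correction is omitted): it computes barycentric weights for a pixel, interpolates the per-vertex (u, v) coordinates, and uses them to index a texture image with a simple point lookup.

```python
import numpy as np

def sample_texture(texture, uv):
    """Point lookup of a texture image at normalised (u, v) in [0, 1]."""
    h, w = texture.shape[:2]
    x = min(int(round(uv[0] * (w - 1))), w - 1)
    y = min(int(round(uv[1] * (h - 1))), h - 1)
    return texture[y, x]

def interpolate_uv(tri, uv, px):
    """Barycentric weights of pixel `px` inside screen-space triangle `tri`,
    used to interpolate the per-vertex texture coordinates `uv`.
    Returns None when the pixel lies outside the triangle."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)

    def edge(o, d, q):                     # signed parallelogram area
        return (d[0] - o[0]) * (q[1] - o[1]) - (d[1] - o[1]) * (q[0] - o[0])

    area = edge(a, b, c)
    w0 = edge(b, c, px) / area
    w1 = edge(c, a, px) / area
    w2 = edge(a, b, px) / area
    if min(w0, w1, w2) < 0:
        return None
    return w0 * np.asarray(uv[0]) + w1 * np.asarray(uv[1]) + w2 * np.asarray(uv[2])
```

Real rasterisers additionally apply perspective correction to the weights and filter the texture lookup; both refinements are left out of this sketch.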

In UV mapping, parameters such as the precise location of cuts (seams), where the 3D model unfolds into a mesh, can be specified by a user. A UV map can also be generated fully automatically, or created with a combination of both approaches.

Image texturing was first introduced in 1974 by Catmull [38], who proposed tying the texture pattern to the surface parameter values. The method guarantees that the pattern rotates and moves with the object. It works for smooth, simple patterns painted on the surface, but it is not adequate for the simulation of rough textures. Since then, various texture-mapping techniques for defining different parameters of surface appearance have been developed and are in use.

The most commonly used map is a *diffuse or colour map*, which gives the surface its colour. Other maps can define *specular reflection*, *normal vector perturbation*, *surface displacement*, *transparency*, *shadows* and others.

Visualisation of rough, wrinkled and irregular materials, first introduced by Blinn [39, 40], can be implemented with *bump mapping*, i.e. a texture function that creates small perturbations of the model's surface normals before they are used in the lighting-intensity calculations. By introducing a bump map, the roughness of the material can be visualised, although for the best results macroscopic irregularities still have to be modelled. The result of bump mapping is a material that appears bumpy while the geometry of the model is not changed; it is therefore sufficient for shallow types of roughness. Images or maps for the visualisation of bumps are greyscale and simulate the surface's height: white represents the highest parts and black the lowest.
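The normal perturbation underlying bump mapping can be sketched in a few lines of NumPy. The sketch below is illustrative only: it assumes a flat base surface with normal (0, 0, 1), takes the height gradients with central differences, and uses a `strength` scaling parameter of our own.

```python
import numpy as np

def bump_normals(height, strength=1.0):
    """Perturbed per-pixel normals for a flat surface (base normal (0, 0, 1))
    derived from a greyscale height map, without changing the geometry."""
    dh_dy, dh_dx = np.gradient(height.astype(float))   # rows = y, columns = x
    n = np.stack([-strength * dh_dx,                   # tilt against the slope
                  -strength * dh_dy,
                  np.ones_like(height, dtype=float)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Shading these normals with any lighting model yields the illusion of relief, while the silhouette of the object remains flat, which is exactly the limitation of bump mapping noted above.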


When a rendered model is observed very closely, the shading of bumpy surfaces provided by bump mapping is no longer adequate, since the surface profile does not reveal realistic roughness and the occlusion effects of the bumps are not visible. In that case, the use of a *displacement map* is necessary. A displacement map contains perturbations of the surface position; when it is used, the geometric positions of points on the surface are displaced. This technique is used to add surface detail to a model, with the advantage that it has no limitation on bump height. This type of mapping can be considered a form of modelling, although it can be computationally quite demanding [41].

A *normal map* can be used to replace the normals entirely. Normal maps are used for adding detail to a model without using more polygons, and the detail can be expressed along all three axes. A normal map should be an RGB image whose three channels correspond to the X, Y and Z coordinates of the surface normal. For the creation of transparent, semi-transparent or even cut-out areas in the surface, a greyscale *alpha map* can be used.
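The RGB encoding of a normal map mentioned above can be decoded as in the following sketch, assuming the common convention that each 8-bit channel maps linearly onto the [-1, 1] range of the corresponding normal component.

```python
import numpy as np

def decode_normal(rgb):
    """Convert an 8-bit RGB normal-map texel to a unit surface normal:
    [0, 255] -> [-1, 1] per channel, followed by renormalisation."""
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)
```

The typical pale-blue colour of tangent-space normal maps corresponds to texels near (128, 128, 255), i.e. to the unperturbed normal (0, 0, 1).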

*Aliasing* is an artifact that appears when using repetitive images, usually with regular, high-resolution patterns, or animations in which the repetition period becomes close to or smaller than the discretisation size. The result of such an artifact, where the texture pattern becomes comparable in scale to the raster grid, is a moiré pattern that can be highly noticeable in visualised scenes [42].

Aliasing artifacts can be reduced with an anti-aliasing technique called *MIP mapping* [37]. In 1983, Williams [43] described MIP maps, or pyramids, which are pre-calculated and optimised sequences of images at a variety of resolutions. The method is used to decrease rendering time and to improve image quality. In MIP mapping, the texture pattern is stored at a number of resolutions: by a process of filtering and decimation, high-resolution images are transformed into lower-resolution ones, which eliminates aliasing.
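The filtering-and-decimation step can be sketched as follows; this minimal Python example builds the pyramid with a 2 x 2 box filter, the simplest of the filters used for MIP maps in practice.

```python
import numpy as np

def build_mip_pyramid(image):
    """MIP pyramid: a sequence of pre-filtered images, each level halving
    the resolution of the previous one by averaging 2 x 2 texel blocks."""
    levels = [np.asarray(image, dtype=float)]
    while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
        cur = levels[-1]
        h, w = cur.shape[0] // 2 * 2, cur.shape[1] // 2 * 2   # crop to even size
        cur = cur[:h, :w]
        levels.append((cur[0::2, 0::2] + cur[1::2, 0::2] +
                       cur[0::2, 1::2] + cur[1::2, 1::2]) / 4.0)
    return levels
```

During rendering, the level whose texel size best matches the screen-space footprint of the texture is selected (often with trilinear interpolation between two adjacent levels), so a repetitive pattern is pre-averaged before it can alias.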

#### **4.2. Work-flow for visualisation of irregular cloth surface**

The review of the appearance models revealed that the visualisation of the natural appearance of cloth is still a challenge for procedural and computational approaches. A computer-based solution was presented in the research of Zhao et al. [28]; however, the issues in the representation of irregular cloth samples, with the morphological, relief and texture data at the large scale that are crucial for the visualisation of cloth, demand special attention. This is also the case for historical and worn cloth, which cannot be modelled procedurally with the methods for fibre, yarn and cloth appearance modelling, but only with accurate image-based techniques. Besides, in cloth production, airy structures such as laces are extremely difficult to reproduce with ordinary 3D modelling techniques in 3D computer graphics software, and they can be computed with simulation algorithms only to some extent [44]. Special attention should also be paid to the visualisation of hand-made fabrics and the artistic and experimental cloths often used in interior and fashion design.

In micro-level appearance modelling, virtual textile porosity is created as a result of interlacing threads into a textile structure, and specularity as a consequence of the reflectance computations of shading algorithms. These models are suitable for visualisation at close viewing distances or for predictive renderings, for example in computer-aided design (CAD). Nevertheless, porosity, specularity and other irregularities and structural phenomena that are visible only when the cloth is observed at a far viewing distance should not be overlooked. For instance, one should not ignore the uneven distribution of organic volume formation in the yarn structure, which forms a random pattern of manifestation only after the yarns are interlaced into the final cloth.
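Porosity (alpha) and specular maps of the kind discussed in this section are, at their core, binary images obtained by thresholding a grey-level photograph. The sketch below is an illustrative stand-in of our own, using a simple global percentile threshold rather than the ImageJ implementations examined in this study, and it also computes the area coverage used in numerical evaluation.

```python
import numpy as np

def percentile_threshold(gray, p=50.0):
    """Global threshold placed at the p-th percentile of the grey levels
    (a simplified stand-in for ImageJ's Percentile method)."""
    return float(np.percentile(gray, p))

def binary_map(gray, threshold, bright=True):
    """Specular map (bright=True, bright texels) or alpha/porosity map
    (bright=False, dark texels such as pores)."""
    return gray >= threshold if bright else gray < threshold

def coverage(mask):
    """Fraction of the image area covered by the detected regions."""
    return float(np.asarray(mask).mean())
```

In a full workflow, such binary images would be exported as greyscale specular and alpha maps and assigned to the material of the 3D model.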

The aim of our contribution was the analysis and reconstruction of cloth *specularity* (presenting total specular reflectance and partial reflectance) and *porosity* (presenting the translucency of the cloth and, in technical terms, the void part of the textile's full volume) through the implementation of *image processing in the workflow* (**Figure 10**), the generation of a specular map and an alpha map (a map for porosity), and the definition of their optimal application (mapping) on geometrical models. An image-based appearance-modelling workflow for the accurate visualisation of a worn cloth with a heterogeneous structure at a large scale was established, originating from photo documentation (image information) of the material. It was crucial for this process to record information on a very large sample surface (microscopic analysis is hence not appropriate), where it was possible to record uneven structures and time-dependent deformation. The analysed sample was a part of the national costume from the Gorenjska region (100% cotton fabric, plain weave, warp density = 20 threads/cm, weft density = 15 threads/cm, Z yarn twist in warp and weft threads).

The review of the references showed that for the analysis and modelling of the appearance of uneven textile surfaces, the use of image-processing methods and computationally less demanding virtual representation is sufficient [45, 46]. These workflows focus on the acquisition of specific data, i.e. optical [47–49] and constructional [50], and on numerical approaches for extracting meaningful information at different levels, depending on the further data implementation. Following these foundations, various illumination conditions at photo acquisition (combinations of two diffuse lights, left and right, and one direct light) were analysed in the workflow of our research, followed by two phases of image processing (histogram equalisation and the rolling-ball algorithm). The processing phases were found to be crucial for the detection of the specular and porous areas of interest. A special focus was on the study of the optimal thresholding method, which enabled the creation of the specular and alpha maps. Here, a comparison of local and global algorithm techniques [51] was performed, together with an evaluation of the image-analysis results of the thresholded images, where different threshold algorithms were implemented.

**Figure 10.** The workflow for visualisation of a worn cloth that has heterogeneous structure.

For the porosity, the threshold was defined with three techniques: the minimum local point of the histogram, manual definition and the *Yen* algorithm (which was selected on the basis of the image analysis among different ImageJ algorithms) [52].

The detection of specular areas was found to be significantly dependent on the local and global threshold approach, and the *Percentile* algorithm was finally selected, as other algorithms resulted in threshold images that were unsuitable for further 3D visualisation [53]. Image analysis of the detected porous and specular areas enabled the numerical evaluation of the areas covered by pores and specular surfaces, as well as the average size and the number of porous and specular areas. Within the image analysis of porosity, special attention was paid to the formation of connected and closed pores in the porosity map. Here, connected pores were treated as errors, since this phenomenon cannot be present in the real fabric. Specular areas manifested a different organisation and disposition depending on the illumination, the image-processing phase and the type of thresholding algorithm. Further, for the implementation in 3D rendering, the defined specular maps were selected and used with consideration of the location of the virtual lights and the specularity appearance of the real fabric.

Fabrics were visualised in the 3D program Blender using four different maps: the diffuse map, which was a photograph, the normal map, the specular map and the alpha map. The last two maps were created and analysed through the workflow established in our research work. In **Figure 11**, the results of the workflow, including variable (real and virtual) lighting conditions and the image processing of the specular and alpha maps, are presented on cloth visualisations. On the left side

method, which enabled the creation of specular and alpha map. Here, the comparison of local and global algorithm techniques [51] was performed and the evaluation of the image analysis results of thresholded images, where different threshold algorithms were implemented.
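The two processing phases named above can be illustrated with a small NumPy/SciPy sketch. This is a simplified stand-in for the ImageJ operations used in the study, not the actual workflow: the rolling-ball background estimate is approximated here by a grayscale morphological opening with a flat window, and the function names are illustrative only.

```python
import numpy as np
from scipy.ndimage import grey_opening

def equalize_hist(img):
    """Histogram equalisation of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map each grey level through the normalised cumulative distribution.
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

def subtract_background(img, radius=20):
    """Rolling-ball-style background subtraction, approximated by a
    grayscale opening with a flat (2r+1) x (2r+1) structuring element."""
    background = grey_opening(img, size=(2 * radius + 1, 2 * radius + 1))
    return img - background  # opening is anti-extensive, so no underflow
```

The opening removes bright features smaller than the window, leaving the slowly varying background, which is then subtracted to flatten uneven illumination before thresholding.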

For the porosity, the threshold was defined with three techniques: the minimum local point of the histogram, manual definition and the *Yen* algorithm (which was selected on the basis of image analysis from among the different ImageJ algorithms) [52].
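The simplest of the three techniques, a percentile-style global threshold, can be sketched as follows. This is a minimal illustration of the idea behind ImageJ's Percentile method (choose the grey level at which the foreground fraction is closest to a target), not the ImageJ implementation itself:

```python
import numpy as np

def percentile_threshold(img, target=0.5):
    """Return the grey level at which the fraction of pixels above the
    threshold is closest to `target` (0.5 aims at 50 % foreground)."""
    fracs = [np.count_nonzero(img > t) / img.size for t in range(256)]
    return int(np.argmin([abs(f - target) for f in fracs]))
```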

The detection of specular areas was found to depend significantly on the choice between the local and the global threshold approach, and the *Percentile* algorithm was finally selected, as the other algorithms resulted in threshold images unsuitable for further 3D visualisation [53]. Image analysis of the detected porous and specular areas enabled the numerical evaluation of the areas covered by pores and specular surfaces, as well as the average size and the number of porous and specular areas. Within the image analysis of porosity, special attention was paid to the formation of connected and closed pores in the porosity map. Here, connected pores were treated as errors, since this phenomenon cannot be present in the real fabric. Specular areas manifested different organisation and disposition depending on the illumination, the image-processing phase and the type of thresholding algorithm. Further, for the implementation in 3D rendering, the specular maps were selected and used with consideration of the location of the virtual lights and the specular appearance of the real fabric.
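The numerical evaluation of porous areas described above amounts to connected-component analysis of the binary alpha map. A minimal sketch (transparent pixels taken as pores, 4-connectivity; the helper name is hypothetical):

```python
import numpy as np
from scipy.ndimage import label

def pore_statistics(alpha_map):
    """Label the connected transparent regions (pores) of a binary alpha
    map; return their count, average size and total area fraction."""
    pores = alpha_map == 0                 # transparent pixels form the pores
    labels, n = label(pores)               # 4-connected components by default
    if n == 0:
        return 0, 0.0, 0.0
    sizes = np.bincount(labels.ravel())[1:]   # drop the background label 0
    return n, float(sizes.mean()), pores.mean()
```

A pore whose label touches another pore's pixels would be merged by the labelling, which is exactly how connected pores (treated as errors above) reveal themselves in the statistics.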

Fabrics were visualised in the 3D program Blender using four different maps: the diffuse map (a photograph), the normal map, the specular map and the alpha map. The last two maps were created and analysed through the workflow established in our research work. In **Figure 11**, the results of the workflow, including variable (real and virtual) lighting conditions and the image processing of the specular and alpha maps, are presented on cloth visualisations. On the left side of **Figure 11**, emphasised specular areas are presented for diffuse (a and b) and direct (c and d) illumination during image acquisition. The difference between samples a and c versus b and d is visible due to the different positions of the virtual lights during rendering, where the manifestation of specularity strongly depends on the type of light (diffuse versus direct). On the right side of **Figure 11**, the visualisation of porosity can be observed, with the phenomena of connected and closed pores.

**Figure 11.** Left: Emphasised specular areas on the cloth sample for diffuse ((a) and (b)) and direct ((c) and (d)) illumination during image acquisition and different positions of virtual lights; right: (e) the visualisation of porosity with the phenomena of connected and closed pores.

## **5. Conclusions**

Our contribution is a comprehensive review of advanced and less demanding methods for modelling of cloth appearance. The models are classified as image-based, surface-based, volumetric and explicit, and the advances in these computer-aided approaches and computations for cloth visualisation are discussed regarding their pipeline, procedural complexity and the implementation of results at fibre-, yarn- and cloth-scale viewing conditions. In the second part of the chapter, a texture-based and image-processing-based modelling of the porosity and specularity of an irregular cloth surface was introduced. For the realistic modelling and visualisation of a worn cloth with a heterogeneous structure and surface on a large scale, detailed image information of the sample was captured and corresponding image-processing methods were applied. In the procedure, the definition and implementation of the optimal threshold algorithm was crucial, and the results served as maps for image-based modelling. The maps were evaluated with image analysis so that the pores and specular areas could also be numerically analysed before they were applied in the final 3D reconstruction. The contribution reviewed the importance of cloth appearance modelling in 3D computer graphics and presented the opportunities for further developments and implementations.

## **Author details**

Tanja Nuša Kočevar and Helena Gabrijelčič Tomc\*

\*Address all correspondence to: helena.gabrijelcic@ntf.uni-lj.si

Department of Textile, Graphic Arts and Design, Faculty of Natural Sciences and Engineering, University of Ljubljana, Ljubljana, Slovenia

## **References**

[1] Jevšnik S., Kalaoglu F., Terliksiz S., Purgaj J. Review of computer models for fabric simulation. Tekstilec, 2014, 57(4), pp. 300–314. DOI: 10.14502/Tekstilec2014.57.300–314.

[2] Magnor M. A., Grau O., Sorkine-Hornung O., Theobalt C. Digital Representations of the Real World. 1st ed., Boca Raton: CRC Press, 2015, pp. 225–238. DOI: 10.1201/b18154-30.

[3] McCluney R. Introduction to Radiometry and Photometry. 2nd ed., Norwood: Artech House Publishers, 2014, 470 p.

[4] Chen X., Hearle J. W. S. Structural hierarchy in textile materials: an overview. In: Modelling and Predicting Textile Behaviour. 1st ed., Cambridge: Woodhead Publishing, 2010, pp. 3–37.

[5] Schröder K., Zhao S., Zinke A. Recent advances in physically-based appearance modeling of cloth. In: SIGGRAPH Asia Courses, Course Notes, Singapore, November 28–December 01, 2012, art. no. 12. DOI: 10.1145/2407783.2407795.

[6] Khungurun P., Schroeder D., Zhao S., Bala K., Marschner S. Matching real fabrics with micro-appearance models. ACM Transactions on Graphics, 35(1), 2015, pp. 1–26. DOI: 10.1145/2818648.

[7] Nicodemus F. Directional reflectance and emissivity of an opaque surface. Applied Optics, 1965, 4(7), pp. 767–775. DOI: 10.1364/AO.4.000767.

[8] Bartell F. O., Dereniak E. L., Wolfe W. L. The theory and measurement of bidirectional reflectance distribution function (BRDF) and bidirectional transmittance distribution function (BTDF). In: Hunt G. H. (ed.). Proceedings of SPIE 0257, Radiation Scattering in Optical Systems, Huntsville, March 3, 1980, Vol. 257, pp. 154–160. DOI: 10.1117/12.959611.


[9] Ceolato R., Rivière N., Hespel L., Biscans B. Probing optical properties of nanomaterials. 12 January 2012, SPIE Newsroom. Available: http://spie.org/newsroom/4047-probing-optical-properties-of-nanomaterials. Accessed 10.1.2017. DOI: 10.1117/2.1201201.004047.

[10] Ashikmin M., Premože S., Shirley P. A microfacet-based BRDF generator. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, New York: ACM Press/Addison-Wesley Publishing Co., 2000, pp. 65–74. DOI: 10.1145/344779.344814.

[11] Irawan P., Marschner S. Specular reflection from woven cloth. ACM Transactions on Graphics (TOG), 2012, 31(1), art. no. 11. DOI: 10.1145/2077341.2077352.

[12] Sadeghi I. Controlling the Appearance of Specular Microstructures. San Diego: University of California, pp. 131–176. Available: https://www.yumpu.com/en/document/view/13881324/controlling-the-appearance-of-specular-microstructures-computer- Accessed 10.1.2017.

[13] Sadeghi I., Bisker O., De Deken J., Jensen W. H. A practical microcylinder appearance model for cloth rendering. ACM Transactions on Graphics, 2013, 32(2), art. no. 14, pp. 1–12. DOI: 10.1145/2451236.2451240.

[14] Adabala N., Magnenat-Thalmann N., Fei G. Visualization of woven cloth. In: Eurographics Symposium on Rendering 2003, Switzerland: Eurographics Association Aire-la-Ville, Leuven, Belgium, June 25–27, 2003, pp. 178–185.

[15] Irawan P. Appearance of Woven Cloth. PhD thesis, Cornell: Cornell University, 2008. Available: https://www.cs.cornell.edu/~srm/publications/IrawanThesis.pdf, pp. 15–19. Accessed 10.1.2017.

[16] Dana J. K., Ginneken V. B., Nayar K. S., Koenderink J. J. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics, 18(1), 1999, pp. 1–34. DOI: 10.1145/300776.300778.

[17] Sattler M., Sarlette R., Klein R. Efficient and realistic visualization of cloth. In: Dutre P., Suykens F., Christensen P. H., Cohen-Or D. L. (eds.). EGRW '03 Proceedings of the 14th Eurographics Symposium on Rendering, Leuven, Belgium, June 25–27, 2003, Switzerland: Eurographics Association Aire-la-Ville, 2003, pp. 167–177. DOI: 10.2312/EGWR/EGWR03/167-177.

[18] Wang J., Zhao S., Tong X., Snyder J., Guo B. Modeling anisotropic surface reflectance with example-based microfacet synthesis. In: Proceedings of ACM SIGGRAPH 2008, ACM Transactions on Graphics, Los Angeles, California, August 11–15, 2008, New York: ACM, 27(3), art. no. 41, pp. 1–9. DOI: 10.1145/1360612.1360640.

[19] Nicodemus F. E., Richmond J. C., Hsia J. J., Ginsberg I. W., Limperis T. Geometric considerations and nomenclature for reflectance. Monograph 161, National Bureau of Standards (US). Available: https://graphics.stanford.edu/courses/cs448-05-winter/papers/nicodemus-brdf-nist.pdf. Accessed 8.1.2017. DOI: 10.1109/LPT.2009.2020494.

[20] Iwasaki K., Mizutani K., Dobashi Y., Nishita T. Interactive cloth rendering of microcylinder appearance model under environment lighting. Computer Graphics Forum, 2014, 33(2), pp. 333–340. DOI: 10.1111/cgf.12302.

[21] Marschner R. S., Jensen W. H., Cammarano M., Worley S., Hanrahan P. Light scattering from human hair fibers. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2003, San Diego, California, July 27–31, New York: ACM, 22(3), 2003, pp. 780–791. DOI: 10.1145/1201775.882345.

[22] Zinke A., Weber A. Light scattering from filaments. IEEE Transactions on Visualization and Computer Graphics, 2007, 13(2), pp. 342–356. DOI: 10.1109/TVCG.2007.43.

[23] Jakob W., Arbree A., Moon T. J., Bala K., Marschner S. A radiative transfer framework for rendering materials with anisotropic structure. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2010, Los Angeles, California, July 26–30, 2010, New York: ACM, 29(4), art. no. 53, pp. 1–13. DOI: 10.1145/1778765.1778790.

[24] Gröller E., Rau T. R., Straßer W. Modeling and visualization of knitwear. IEEE Transactions on Visualization and Computer Graphics, 1995, 1(4), pp. 302–310. DOI: 10.1109/2945.485617.

[25] Kajiya J. T., Kay T. L. Rendering fur with three dimensional textures. In: SIGGRAPH '89 Proceedings of the 16th Annual Conference on Computer Graphics and Interactive Techniques, New York: ACM, 1989, 23(3), pp. 271–280. DOI: 10.1145/74333.74361. Available: https://www.cs.drexel.edu/~david/Classes/CS586/Papers/p271-kajiya.pdf.

[26] Xu Y. Q., Chen Y., Lin S., Zhong H., Wu E., Guo B., Shum H. Y. Photorealistic rendering of knitwear using the lumislice. In: SIGGRAPH '01 Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, New York: ACM, 2001, pp. 391–398. DOI: 10.1145/383259.383303.

[27] Schröder K., Klein R., Zinke A. A volumetric approach to predictive rendering of fabrics. Computer Graphics Forum, 2011, 30(4), pp. 1277–1286. DOI: 10.1111/j.1467-8659.2011.01987.x.

[28] Zhao S., Jakob W., Marschner S., Bala K. Building volumetric appearance models of fabric using micro CT imaging. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2011, Vancouver, British Columbia, Canada, August 07–11, New York: ACM, 2011, 30(4), art. no. 44, pp. 98–105. DOI: 10.1145/2010324.1964939.

[29] Jakob W., Arbree A., Moon T. J., Bala K., Marschner S. A radiative transfer framework for rendering materials with anisotropic structure. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2010, Los Angeles, California, July 26–30, 2010, New York: ACM, 2010, 29(4), art. no. 53. DOI: 10.1145/1778765.1778790.

[30] Jakob W. Mitsuba Documentation. Date of publication February 25, 2014, date of update 16.7.2014. Available: http://www.mitsuba-renderer.org/releases/current/documentation.pdf, 249 p. Accessed 17.1.2017.


[31] Zhao S., Jakob W., Marschner S., Bala K. Structure-aware synthesis for predictive woven fabric appearance. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH, New York: ACM, 2012, 31(4), art. no. 75. DOI: 10.1145/2185520.2185571.

[32] Zhao S., Hašan M., Ramamoorthi R., Bala K. Modular flux transfer: efficient rendering of high-resolution volumes with repeated structures. In: ACM Transactions on Graphics (TOG) - SIGGRAPH 2013 Conference Proceedings, New York: ACM, 2013, 32(4), art. no. 131. DOI: 10.1145/2461912.2461938.

[33] Zhao S., Luan F., Bala K. Fitting procedural yarn models for realistic cloth rendering. In: ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH, July 2016, New York: ACM, 2016, 35(4), art. no. 51. DOI: 10.1145/2897824.2925932.

[34] Zhang J., Baciu G., Zheng D., Liang C., Li G., Hu J. IDSS: a novel representation for woven fabrics. IEEE Transactions on Visualization and Computer Graphics, 2013, 19(3), pp. 420–432. DOI: 10.1109/TVCG.2012.66.

[35] Schröder K., Zinke A., Klein R. Image-based reverse engineering and visual prototyping of woven cloth. IEEE Transactions on Visualization and Computer Graphics, 2015, 21(2), pp. 188–200. DOI: 10.1109/TVCG.2014.2339831.

[36] Heckbert P. S. Survey of texture mapping. IEEE Computer Graphics and Applications, November 1986, 6(11), pp. 56–67. DOI: 10.1109/MCG.1986.276672.

[37] Haindl M., Filip J. Visual Texture. Accurate Material Appearance Measurement, Representation and Modeling. 1st ed., London: Springer-Verlag, 2013, 284 p. DOI: 10.1007/978-1-4471-4902-6.

[38] Catmull E. A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD dissertation, Salt Lake City: University of Utah, 1974.

[39] Blinn J. F. Simulation of wrinkled surfaces. ACM SIGGRAPH Computer Graphics, ACM New York, 1978, 12(3), pp. 286–292. DOI: 10.1145/965139.507101.

[40] Max N. L., Becker B. G. Bump shading for volume textures. IEEE Computer Graphics and Applications, 1994, 14(4), pp. 18–20. DOI: 10.1109/38.291525.

[41] Cook L. R. Shade trees. ACM SIGGRAPH Computer Graphics, 1984, 18(3), pp. 223–231, ACM New York. DOI: 10.1145/964965.808602.

[42] Cant R., Shrubsole P. A. Texture potential MIP mapping, a new high-quality texture antialiasing algorithm. ACM Transactions on Graphics, 19(3), July 2000, pp. 164–184. DOI: 10.1145/353981.353991.

[43] Williams W. Pyramidal parametrics. ACM SIGGRAPH Computer Graphics, 1983, 17(3), pp. 1–11. DOI: 10.1145/964967.801126.

[44] Gabrijelčič T. H., Pivar M., Kočevar T. N. Definition of the workflow for 3D computer aided reconstruction of a lace. In: Simončič B., Tomšič B., Gorjanc M. (eds.). Proceedings, 16th World Textile Conference AUTEX 2016, June 8–10, 2016, Ljubljana, Slovenia, Ljubljana: Faculty of Natural Sciences and Engineering, Department of Textiles, Graphic Arts and Design, 2016, p. 8.

[45] Cybulska M. Reconstruction of archaeological textiles. Fibres & Textiles in Eastern Europe, 2010, 18, 3(80), pp. 100–105.

[46] Hu J., Xin B. Visualization of textile surface roughness based on silhouette image analysis. Research Journal of Textile and Apparel, 2007, 11(2), pp. 8–20. DOI: 10.1108/RJTA-11-02-2007-B002.

[47] Havlová M. Model of vertical porosity occurring in woven fabrics and its effect on air permeability. Fibres & Textiles in Eastern Europe, 2014, 22, 4(106), pp. 58–63.

[48] Hadjianfar M., Semnani D., Sheikhzadeh M. A new method for measuring luster index based on image processing. Textile Research Journal, 2010, 80(8), pp. 726–733. DOI: 10.1177/0040517509343814.

[49] Jong-Jun K. Image analysis of luster images of woven fabrics and yarn bundle simulation in the weave - cotton, silk, and velvet fabrics. Journal of Fashion Business, 2002, 6(6), pp. 1–11.

[50] Swery E. E., Allen T., Piaras K. Automated tool to determine geometric measurements of woven textiles using digital image analysis techniques. Textile Research Journal, 2016, 86(6), pp. 618–635. DOI: 10.1177/0040517515595031.

[51] Kočevar T. N., Gabrijelčič T. H. Analysis of different threshold algorithms for definition of specular areas of relief, interlaced structures. In: Pavlovič Ž. (ed.). Proceedings, 8th International Symposium on Graphic Engineering and Design GRID 2016, Novi Sad, November 3–4, 2016, Novi Sad: Faculty of Technical Sciences, Department of Graphic Engineering and Design, 2016, pp. 297–304.

[52] Kočevar T. N., Gabrijelčič T. H. 3D visualisation of woven fabric porosity. Tekstilec, 2016, 59(1), pp. 28–40. DOI: 10.14502/Tekstilec2016.59.28–40.

[53] Kočevar T. N., Gabrijelčič T. H. 3D visualisation of specularity of woven fabrics. Tekstilec, 2016, 59(4), pp. 335–349. DOI: 10.14502/Tekstilec2016.59.28–40.


**Chapter 4**

## **Textile Forms' Computer Simulation Techniques**

Andreja Rudolf, Slavica Bogović, Beti Rogina Car, Andrej Cupar, Zoran Stjepanovič and Simona Jevšnik

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67738

#### **Abstract**

Computer simulation techniques of textile forms already represent an important tool for textile and garment designers, since they offer numerous advantages, such as the quick and simple introduction of changes while developing a model, in comparison with conventional techniques. Therefore, the modeling and simulation of textile forms will always be an important issue and challenge for researchers, since close‐to‐reality models are essential for understanding the performance and behavior of textile materials. This chapter deals with computer simulation of different textile forms. In the introductory part, it reviews the development of complex modeling and simulation techniques related to different textile forms. The main part of the chapter focuses on the study of fabric and fused panel drape using the finite element method and on the development of some representative textile forms, above all functional and protective clothing for persons who sit while performing different activities. Computer simulation techniques and scanned 3D body models in a sitting posture are used for this purpose. The engineering approaches to the design of textile forms for particular purposes presented in this chapter show the benefits and limitations of specific 3D body scanning and computer simulation techniques and outline future research challenges.

**Keywords:** textile forms, computer simulation techniques, 3D scanning, 3D body models

## **1. Introduction**

The chapter entitled Textile Forms' Computer Simulation Techniques is intended to raise awareness about the importance of modeling and simulation of different textile forms. Since textile objects are very common and omnipresent all around us, the appropriate simulation techniques can be regarded as a very important part of the wide area of computer simulation.

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Textiles can be generally divided into linear (fibers, yarns, threads), two‐dimensional (woven and knitted fabrics, nonwovens) and three‐dimensional (garments, architectural textiles, some technical textiles) textile forms. Modeling and simulation of these forms can be extremely complex due to their visco‐elastic properties and unique behavior when exposed to forces, even relatively small ones, such as the gravitational force.

To simulate textile forms, different modeling approaches have been developed to model the structure of textile materials: geometrical, physical, or hybrid models [1–3]. In the virtual environment, interactions between the textile forms and other 3D objects (collision detection and collision response) are of great importance and should therefore also be computed. Once the 3D shape of the textile form has been computed, it can be rendered for visualization.
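The physically based family of models can be illustrated by a toy mass-spring time step (explicit Euler integration, unit masses). This is a minimal sketch of the general technique, not any specific published model; all names and parameter values are illustrative:

```python
import numpy as np

def step_cloth(pos, vel, springs, rest, k=50.0, dt=0.01, g=9.81, damping=0.02):
    """One explicit-Euler time step of a toy mass-spring cloth model
    (unit masses). `springs` is an (m, 2) array of particle index pairs,
    `rest` the corresponding rest lengths."""
    force = np.zeros_like(pos)
    force[:, 1] -= g                                  # gravity acts along -y
    d = pos[springs[:, 1]] - pos[springs[:, 0]]       # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law: pull/push each particle pair along the spring direction.
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force) * (1.0 - damping)
    return pos + dt * vel, vel
```

A full simulator would add structural, shear and bend springs over a particle grid plus the collision handling mentioned above; the step itself stays the same.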

Textile Forms' Computer Simulation Techniques

http://dx.doi.org/10.5772/67738

69

The geometrically based approaches are used when the final shape of the textile form is needed without considering the dynamic process. They are simple and have low cost of

In the computer graphics, the fabric simulations were first performed by using the geometrical models. These models represent simple geometrical formulations of the fabric without its physical and mechanical properties on local surfaces. Therefore, they are unsuitable for complex reproducible fabric simulation. They focused on appearance of the geometrical shape, particularly folds and wrinkles, which are presented by geometrical equations [1, 2]. The first attempt to the fabric simulation by using a geometrical model was presented by Weil in 1986 [2, 4]. He suggested a method for simulation of the hanging fabric as a mesh of points. The simulation of its shape was carried out by fitting of the catenary curves between

The research studies regarding the garments simulation by using the geometrical modeling can be found in 1990. A method for modeling of the sleeve on a bending arm was based on a hollow cylinder that consists of a series of circular rings and with a displacement of circular rings along the axial direction, the folds are formed [1, 2]. A method for designing of the 3D garment directly on a 3D digitized shape of the human body was presented by Hinds and McCartney [5]. They represented the garment as a collection of 3D surfaces (pattern pieces) of complex shape around the static 3D body, whose fit around panel edges with respect to body form may vary over the surface. A geometrical approach for modeling of the fabric folds on the sleeve was proposed by Ng and Grimsdale [6], where a set of rules was developed for

In general, the geometrically based simulation techniques are used for modeling of the fabric and garment drape. Therefore, they are effective in computing the shape of fabrics or garments. However, the geometrically based techniques usually do not take into account the dynamic interaction between the fabric/garment and the object, because they are difficult to

Stumpp et al. [7] proposed a geometric deformation model for the efficient and robust simulation of garments. With this model a high stretching, shearing, and low bending resistance could be modeled, and the deformed region can be restored back to its original shape. Therefore, it can be used for modeling of the interaction between the fabric and other objects of different shapes. Researchers had represented the physically plausible dynamics of their approach with a com‐ parison to a traditional physically based deformation model, **Figure 1**. The results show that

the similar fabric properties can be reproduced with both models.

**2.1. Geometrical models**

the hanging or constraint points.

automatic generation of the fold lines.

be geometrically modeled [1].

computation [1].

The topics referring to computer‐based simulation of textile‐based objects have already been investigated by a number of researchers and authors. Many of them have developed or applied different methods and models for describing the structure and behavior of textile forms. However, there is still a lack of newer publications dealing with the intriguing phenomena related to modeling and simulation of complex textiles, garments, and other textile forms.

The modeling and simulation techniques for woven and knitted fabrics and garments are discussed with a view to an accurate understanding of the relationship between the construction and behavior of textile forms, as a key to engineering the design of textile forms for their intended applications. Three approaches to modeling textile forms are presented: geometry‐based, physically based, and hybrid. Computer simulation of textile forms in interaction with 3D objects, and its importance for the development of specific, custom‐designed garments, is considered from the perspective of virtual prototyping on 3D scanned body models.

The chapter introduces three case studies related to recent advances in the engineering approach to computer simulation techniques in the field of textile science. The first deals with modeling and simulation of textile fabrics and fused panels based on the finite element method. The second case study refers to the simulation of garments on sitting 3D human body models using one of the commercial 3D CAD packages. The topic of the third case study is the application of simulation techniques to the development of a special form of protective clothing for sport aircraft pilots—the so‐called anti‐g suit.

## **2. Complex modeling and simulation techniques of textile forms**

The visual appearance of textile forms, both real and virtual, is influenced by:

(a) the shapes of the three‐dimensional (3D) textile forms, determined by the corresponding two‐dimensional (2D) pattern pieces, and

(b) the textile materials used, whose behavior is influenced by their mechanical and physical properties.
The realistic behavior of textile materials in the virtual environment mainly depends on computer‐based models of the textile materials. These are used for simulation of different textile forms, such as tablecloths, flags, garments, shoes, etc., with the aim of studying the behavior of textile materials in the virtual environment or of developing the complex shape of textile forms that consist of two or more pattern pieces.

The simulation techniques of textile forms should be stable and fast so that they can be applied in different environments, and interactive performance can be achieved [1].

To simulate textile forms, different modeling approaches have been developed to model the structure of textile materials: geometrical, physical, or hybrid models [1–3]. In the virtual environment, interactions between the textile forms and other 3D objects (collision detection and collision response) are of great importance and therefore should also be computed. Once the 3D shape of the textile form is computed, it can be rendered for visualization.

## **2.1. Geometrical models**

Textiles can generally be divided into linear (fibers, yarns, threads), two‐dimensional (woven and knitted fabrics, nonwovens) and three‐dimensional (garments, architectural textiles, some technical textiles) textile forms. Modeling and simulation of these forms can be extremely complex due to their visco‐elastic properties and unique behavior when exposed to forces, even though relatively small ones, such as gravitational force.


68 Computer Simulation


The geometrically based approaches are used when the final shape of the textile form is needed without considering the dynamic process. They are simple and have low cost of computation [1].

In computer graphics, fabric simulations were first performed using geometrical models. These models represent simple geometrical formulations of the fabric, without its physical and mechanical properties on local surfaces; therefore, they are unsuitable for complex, reproducible fabric simulation. They focus on the appearance of the geometrical shape, particularly folds and wrinkles, which are described by geometrical equations [1, 2].

The first attempt at fabric simulation using a geometrical model was presented by Weil in 1986 [2, 4]. He suggested a method for simulating hanging fabric as a mesh of points, computing its shape by fitting catenary curves between the hanging or constraint points.
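The catenary-fitting idea can be sketched numerically. The snippet below is an illustrative reconstruction, not Weil's original algorithm: the function name and parameters are ours, the two hanging points are assumed to lie at the same height, and the catenary parameter is found by bisection on the thread length.

```python
import math

def catenary_points(p_left, p_right, thread_length, n=50):
    """Sample n points on the catenary hanging between two points at equal
    height, given the free thread length between them (longer than the span)."""
    x0, y0 = p_left
    x1, _ = p_right                     # assumed to be at the same height as p_left
    span = x1 - x0
    assert thread_length > span > 0, "thread must be longer than the span"
    # Solve thread_length = 2*a*sinh(span/(2*a)) for the catenary parameter a
    # by bisection: a small a gives a very slack curve, a large a a nearly
    # straight one.
    slack = lambda a: 2.0 * a * math.sinh(span / (2.0 * a)) - thread_length
    lo, hi = span / 1400.0, span * 1e6  # slack(lo) > 0, slack(hi) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if slack(mid) > 0.0 else (lo, mid)
    a = 0.5 * (lo + hi)
    xm = 0.5 * (x0 + x1)                # lowest point lies at mid-span
    c = y0 - a * math.cosh((x0 - xm) / a)
    xs = [x0 + span * i / (n - 1) for i in range(n)]
    return [(x, a * math.cosh((x - xm) / a) + c) for x in xs]
```

Fitting such curves between successive pairs of constraint points, and then relaxing the interior of the point mesh, yields the overall hanging shape in the spirit of Weil's method.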

Research studies on garment simulation using geometrical modeling can be found from 1990 onwards. A method for modeling a sleeve on a bending arm was based on a hollow cylinder consisting of a series of circular rings; by displacing the circular rings along the axial direction, folds are formed [1, 2]. A method for designing a 3D garment directly on a 3D digitized shape of the human body was presented by Hinds and McCartney [5]. They represented the garment as a collection of 3D surfaces (pattern pieces) of complex shape around the static 3D body, whose fit around panel edges with respect to the body form may vary over the surface. A geometrical approach for modeling fabric folds on a sleeve was proposed by Ng and Grimsdale [6], where a set of rules was developed for automatic generation of the fold lines.

In general, geometrically based simulation techniques are used for modeling fabric and garment drape; they are effective in computing the shape of fabrics or garments. However, geometrically based techniques usually do not take into account the dynamic interaction between the fabric/garment and the object, because such interactions are difficult to model geometrically [1].

Stumpp et al. [7] proposed a geometric deformation model for the efficient and robust simulation of garments. With this model, high stretching and shearing resistance and low bending resistance could be modeled, and a deformed region can be restored to its original shape. It can therefore be used for modeling the interaction between the fabric and other objects of different shapes. The researchers demonstrated the physically plausible dynamics of their approach by comparison with a traditional physically based deformation model, **Figure 1**. The results show that similar fabric properties can be reproduced with both models.


Textile Forms' Computer Simulation Techniques

http://dx.doi.org/10.5772/67738

71


**Figure 1.** Computer simulation of a fabric piece onto a sphere: (a, b) fabric draping by using the physically‐based approach, (c, d) fabric draping by using the geometrically‐based approach [7].

## **2.2. Physical models**

The physically based approaches are widely used for computing the mechanical behavior of textile forms. In physically based methods, deformation is based on the structures and properties of the textile materials. The structures and properties of textile forms have been investigated as highly flexible mechanical materials since the 1970s. Mechanical simulation intends to reproduce the virtual fabric surface with given parameters, which are expressed as stress‐strain relationships and described by curves with different degrees of simplification and approximation. Simple linear models can approximate these curves, whilst nonlinear analytic functions (polynomials or interval‐defined functions) are used for accurate models. In addition, the fabric mass per surface unit should be considered [3].

The physically based models are independent of the geometrical representation. Therefore, with them it is possible to solve complex numerical problems by integrating various constraints. Special attention has been paid to the simulation of large deformations of textile forms [1]. The behavior of a textile material can be described by a complex system of mathematical equations (mechanical laws), which are usually partial differential equations or other types of differential systems. Analytical solutions, which are available only for a limited class of simple equations and solve only simple situations, are not suitable for complex fabric simulations. Therefore, numerical methods are implemented in fabric simulation, which requires discretization: explicit computation of the physical values at precise points in space and time. Space discretization can be achieved through numerical solution techniques (models derived from continuum mechanics) or through the mechanical model itself, as in particle system models [3].

The first approach to the simulation of fabric and deformable surfaces was introduced by Terzopoulos et al. [1, 3, 7–9]. They obtained the motion of the object's points using the Lagrange equations of motion, which were first discretized by a finite‐element method. Therefore, a large system of ordinary differential equations had to be solved.

#### *2.2.1. Particle‐based approach*

A particle‐based system divides the object (fabric) into small particles on a triangular or rectangular mesh. The points are defined as finite masses at the mesh intersections [2]. The number of points is defined according to the problem and technique used. For example, a piece of fabric can be modeled as a two‐dimensional arrangement of particles that conceptually represent the intersection points of the warp and weft yarns within a fabric weave, which can be plain, twill, satin, etc. [10].

The first particle systems for fabric simulation were based on a form of the mesh [11, 12]. Using this simulation approach, fabric drape as a dynamic phenomenon was simulated very realistically on a table, on a sphere, etc. These types of simulations already reflected nonlinear behavior similar to continuum mechanics models, but their accuracy was limited for the simulation of large deformations, and they required longer computation times. Subsequently, faster models based on spring‐mass meshes were developed, using fast implicit numerical integration methods for computation [3, 13]. In this modeling technique, an object is represented as a collection of mass points (particles) that are interconnected by structural, bend, and shear springs through a mesh structure. Each mass point has a mass and, at a certain time, a position and velocity, and the points are interconnected by linear springs [1, 10, 14]. The way the springs connect the particles (the topology of the object) and the differences in strength of each spring influence the behavior of the object as a whole. Different types of mesh topologies have been presented with respect to the connection between the masses and springs [15]: the rectangular mesh, first proposed in 1995 by Provot [16], **Figure 2(a)**; the responsive mesh, described by Choi and Ko [17], **Figure 2(b)**; the triangular mesh, presented by Selle et al. [18], **Figure 2(c)**; and the simplified mesh of Hu et al. [19], **Figure 2(d)**.
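A minimal sketch of the spring‐mass technique follows. It is deliberately simplified relative to the models cited above: structural springs only (no shear or bend springs), a damped explicit (symplectic‐Euler) step instead of implicit integration, and a naive sphere collision response that projects penetrating particles back onto the surface; all function names are illustrative.

```python
import math

def build_cloth(nx, ny, spacing):
    """Rectangular mass-spring mesh: a particle at every warp/weft crossing,
    structural springs between horizontal and vertical neighbours."""
    pos = [[x * spacing, 0.0, z * spacing] for z in range(ny) for x in range(nx)]
    vel = [[0.0, 0.0, 0.0] for _ in pos]
    springs = [(z * nx + x, z * nx + x + 1, spacing)
               for z in range(ny) for x in range(nx - 1)]
    springs += [(z * nx + x, (z + 1) * nx + x, spacing)
                for z in range(ny - 1) for x in range(nx)]
    return pos, vel, springs

def step(pos, vel, springs, dt, mass=0.01, k=40.0, damping=0.02,
         sphere_centre=None, sphere_radius=0.0):
    """One damped symplectic-Euler step: gravity plus Hookean spring forces,
    then a naive collision response against an optional sphere."""
    force = [[0.0, -9.81 * mass, 0.0] for _ in pos]   # gravity on every particle
    for i, j, rest in springs:
        d = [pos[j][a] - pos[i][a] for a in range(3)]
        length = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2) or 1e-12
        f = k * (length - rest) / length              # Hooke's law along the spring
        for a in range(3):
            force[i][a] += f * d[a]
            force[j][a] -= f * d[a]
    for i in range(len(pos)):
        for a in range(3):
            vel[i][a] = (1.0 - damping) * vel[i][a] + dt * force[i][a] / mass
            pos[i][a] += dt * vel[i][a]
        if sphere_centre is not None:
            d = [pos[i][a] - sphere_centre[a] for a in range(3)]
            dist = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            if 0.0 < dist < sphere_radius:            # push back to the surface
                for a in range(3):
                    pos[i][a] = sphere_centre[a] + d[a] * sphere_radius / dist
```

Stepping an unsupported mesh makes it fall under gravity; placing a sphere below it reproduces, in crude form, the drape-onto-a-sphere scenario of **Figure 1**. Production simulators use implicit integration precisely because such explicit steps force very small `dt` for stiff springs.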

#### *2.2.2. Finite‐element approach*


The finite element method (FEM) is widely used for mechanical simulation and numerical analysis. It is based on the use of matrix algebra. The finite‐element method has been developed across particular scientific disciplines, with wide possibilities for solving various problems in mathematics, physics, continuum mechanics, etc.

**Figure 2.** Mesh topologies of the mass spring model [15].

FEM is mainly applied to elastic solid or shell modeling for mechanical engineering purposes, where linear elasticity and small deformations appear. Therefore, it is not so well adapted for fabrics, which are very deformable objects. Early attempts at fabric modeling using FEM showed high computation times [3, 20]. However, it was discovered that by using appropriate simplifications and efficient algorithms, FEM is also usable in interactive graphics applications in the field of textile engineering [3]. The research studies regarding finite element analysis applied to fabrics are well described in source [21].

For solving problems with FEM, different computer programs, such as ANSYS or ABAQUS, can be used.

Solving mechanical problems with the finite element method proceeds in several steps [22]:

• discretization of the continuum,

• element equation,

• integration,

• boundary conditions,

• numerical analysis,

• interpretation of results.


For the analysis of problems using the finite element method, it is first necessary to create the so‐called geometric model of the real problem. This step is followed by the discretization of the problem into one‐, two‐ or three‐dimensional elements, depending on the structure [22]. When building the finite element mesh, we have to ensure that the mesh fits the structure of the geometric body. The elements should be selected so that their shape suits the form of the body. Depending on their form, the elements are divided into linear, shell and volume elements, beam elements, membrane elements, spring and damper elements, and infinite elements, **Figure 3**.
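For the simplest possible case, a 1D domain split into linear two‐node elements, the discretization step amounts to generating node coordinates and an element connectivity table. This is a sketch with illustrative names; real shell or volume meshes require 2D/3D connectivity:

```python
def discretize_1d(length, n_elements):
    """Split a 1D domain into linear two-node finite elements: returns the
    node coordinates and a connectivity table (node indices per element)."""
    nodes = [i * length / n_elements for i in range(n_elements + 1)]
    elements = [(i, i + 1) for i in range(n_elements)]
    return nodes, elements
```

Note that adjacent elements share a node, which is exactly what lets their stiffness contributions be aggregated into one global system later on.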

Each finite element has a certain number of nodes, which determine the geometry and position of each individual element, **Figure 4**. The elements are interconnected via nodes and form a finite element mesh.

If we observe the displacement *u* in the element, we can see that, for a generally designed and loaded model, it changes depending on the coordinates. This displacement field is unknown in advance and is given in the form [24]:

$$\{\mathbf{u}\} = \begin{Bmatrix} \mathbf{u} \\ \mathbf{v} \\ \mathbf{w} \end{Bmatrix} = \begin{Bmatrix} \mathbf{f}(x, y, z) \\ \mathbf{g}(x, y, z) \\ \mathbf{h}(x, y, z) \end{Bmatrix} \tag{1}$$

For any chosen finite element, Eq. (1) can be written in the matrix form [24]:

$$\{\mathbf{u}\} = [\mathbf{a}]\{\mathbf{c}\} \tag{2}$$


**Figure 3.** Types of finite elements [23].

**Figure 4.** General description of the element.

where {**u**} is the displacement vector in the element, [**a**] is the matrix, and {**c**} is the vector of constants.

The basic equation of the finite element specifies a link between the nodal forces, that is, the vector of external loads {**F**}, and the nodal displacements {**U**}, and is given as follows [24]:


$$\{\mathbf{F}\} = [\mathbf{K}]\{\mathbf{U}\} + \{\mathbf{F}_T\} \tag{3}$$

where [**K**] is the stiffness matrix of the finite element, which depends on the shape functions of the used elements and on rheological parameters (Young's modulus, shear modulus, Poisson's ratio), and {**F**T} is the load vector, which takes the temperature load into account.

If the temperature is neglected when observing the construction load, the basic equation for the finite element is given as [24]:

$$\{\mathbf{F}\} = [\mathbf{K}]\{\mathbf{U}\} \tag{4}$$
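Eq. (4) can be illustrated with a minimal numeric sketch (function names are ours; a real textile FEM uses shell elements and dedicated solvers such as the ANSYS/ABAQUS packages mentioned above): a chain of two‐node spring elements is assembled into a global stiffness matrix, the fixed‐node boundary condition is applied, and {F} = [K]{U} is solved by Gaussian elimination.

```python
def solve_spring_chain(n_elements, k_el, f_end):
    """Chain of two-node spring elements (stiffness k_el each), node 0 fixed,
    axial force f_end at the free end: assemble the global [K] from the
    element matrices [[k,-k],[-k,k]], apply the boundary condition by
    dropping the fixed node's row and column, then solve {F} = [K]{U}."""
    n = n_elements + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elements):                  # aggregation over elements
        K[e][e] += k_el; K[e][e + 1] -= k_el
        K[e + 1][e] -= k_el; K[e + 1][e + 1] += k_el
    F = [0.0] * n
    F[-1] = f_end
    Kr = [row[1:] for row in K[1:]]              # boundary condition: U_0 = 0
    U = _gauss(Kr, F[1:])
    return [0.0] + U

def _gauss(A, b):
    """Dense Gaussian elimination with partial pivoting (illustrative only)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            b[r] -= m * b[c]
            for j in range(c, n):
                A[r][j] -= m * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x
```

For three springs of stiffness 100 N/m loaded with 10 N at the free end, each element carries the full force and stretches by 0.1 m, so the nodal displacements accumulate to 0.0, 0.1, 0.2, and 0.3 m.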

The finite elements of a certain space occupy their local coordinate system, and therefore they must be transformed into a so‐called common global coordinate system. This transformation of the equations of individual elements from the local coordinate system (x, y, z) in the global system (X, Y, Z) is effected by transformation matrices. The equation, which defines the rule of the transformation of the equation of a finite element from the local to the global coordinate system, is given in the following form [24]:

$$\{\bar{\mathbf{F}}\} = [\bar{\mathbf{K}}]\{\bar{\mathbf{U}}\} \tag{5}$$

each node. Thus, for example, the structural equation for a linear element, which always has

[**¯ K**ii] [ **¯ K**ij] **¯ Ui ¯ Uj**

} (8)

Textile Forms' Computer Simulation Techniques

http://dx.doi.org/10.5772/67738

75

**[ ¯ Kjj]**]{

[ **¯ K**ji]

*method*. The convergence of this method is very effective for well‐selected initial values.

Hybrid models combine geometrically based and physically based techniques. In this method, the rough shape of the textile form is computed based on the geometrical model and physi‐ cally based methods are then employed to refine the final shape of the textile form, which is

The work by Rudomin [26] proposes to shorten the computation time needed in traditional physical techniques by using the geometrical approximation as a starting condition. Kunii and Gotoda [27] proposed a hybrid method for simulation of the fabric wrinkling. The fabric physi‐ cal model in this method consists of spring connecting points, metric energy, and curvature energy. During the fabric simulation, the shape of the fabric was obtained by using a gradient descent method to find the energy minima. After that, singularity theory was used to charac‐ terize the resulting wrinkles. In the next approach proposed by Taillefer [2, 28], the hanging fabric's folds between two hanging points were characterized horizontal and vertical. The hori‐ zontal folds were modeled by using catenaries, whilst vertical fold were modeled by using the relaxation process, similar as suggested in the work by Wail [4]. In addition, also other hybrid

models for simulation of fabric folds were proposed in works described in Ref. [2].

**3.1. Case study 1: computer simulation of fabric and fused panel drape by using FEM**

A number of modeling and simulation techniques have been used for representation of textile woven and knitted fabrics. Each of them has certain advantages, but also restrictions and

Each structure must be stable supported. Therefore, a part of the nodal displacements and rotations is known. They represent boundary conditions. In the finite‐element method, the forces, which act on a structure, are always given in the global coordinate system and are introduced into the equation of the structure through the elements' nodes [24]. Numerical analysis represents the way of solving of equilibrium equations, or so‐called structure equation [25]. Numerical analysis becomes very complex when it comes to solving nonlinear problems, because in solving equilibrium equations we have to take into account also the change in the geometry of the body in order to obtain the correct solution. Nonlinear models can only contain a few or an extremely large number of variables. Thus, instead of one solution, which is obtained from the linear problems, nonlinear problems are solved iteratively, since during computing the stiffness matrix and the shape of the deformation of the body are changing. The most frequently used iterative method for finding the roots in multidimensional spaces is a *Newton‐Raphson* 

{**¯ F**i} {¯

**<sup>F</sup>**j}} <sup>=</sup> [

two nodes, is given in the form [24]:

{

**2.3. Hybrid models**

computationally efficient [1].

**3. Textile forms' computer simulation**

where {**¯ <sup>F</sup>**} is the nodal forces given in the global coordinate system, {**¯U**} is the nodal displace‐ ments given in the global coordinate system.

The system, which is discretized into finite elements, has *e* elements and *n* nodes, **Figure 5**. The equation of the system is obtained by aggregating all the equation of the expression Eq. (5) for all the elements *e* in the total equation, symbolically given as [24]*:*

$$\mathbf{F} = \mathbf{K} \cdot \mathbf{U} \tag{6}$$

Aggregation takes place in a way to combine all the elements of the matrices belonging to the common node.

Vectors of nodal forces **F** and nodal displacements **U** are further written in the following form [24]:

$$\begin{aligned} \text{bound} \left[ \begin{smallmatrix} \mathbf{A} \bullet \end{smallmatrix} \right] . \end{aligned} \qquad \qquad \qquad \begin{Bmatrix} \mathbf{\overline{F}}\_1 \end{Bmatrix} \qquad \qquad \begin{Bmatrix} \mathbf{\overline{U}}\_1 \end{Bmatrix} \\ \qquad \qquad \begin{Bmatrix} \mathbf{\overline{F}}\_2 \end{Bmatrix} \qquad \qquad \begin{Bmatrix} \mathbf{\overline{U}}\_2 \end{Bmatrix} \\ \qquad \mathbf{F} = \begin{Bmatrix} \mathbf{\overline{F}}\_3 \end{Bmatrix} . \bullet \quad \mathbf{U} = \begin{Bmatrix} \mathbf{\overline{U}}\_3 \end{Bmatrix} \\ \qquad \qquad \begin{Bmatrix} \mathbf{\overline{F}}\_n \end{Bmatrix} \qquad \qquad \begin{Bmatrix} \mathbf{\overline{U}}\_n \end{Bmatrix} \qquad \qquad \begin{Bmatrix} \mathbf{\overline{U}}\_n \end{Bmatrix} \end{Bmatrix} \tag{7}$$

where the submatrices {¯ Fi } and {¯<sup>U</sup><sup>i</sup> } have such number of components as the degrees of freedom of a certain node.

*<sup>K</sup>* is the stiffness matrix consisting of n × n submatrices [¯<sup>K</sup>rs] and is determined based on the stiffness matrix of individual elements, when they are divided into submatrices that belong to

**Figure 5.** Discretization of the geometrical model.

each node. Thus, for example, the structural equation for a linear element, which always has two nodes, is given in the form [24]:

$$\begin{Bmatrix} \{\mathbf{F}_i\} \\ \{\mathbf{F}_j\} \end{Bmatrix} = \begin{bmatrix} [\mathbf{K}_{ii}] & [\mathbf{K}_{ij}] \\ [\mathbf{K}_{ji}] & [\mathbf{K}_{jj}] \end{bmatrix} \begin{Bmatrix} \{\mathbf{U}_i\} \\ \{\mathbf{U}_j\} \end{Bmatrix} \tag{8}$$
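The node-by-node aggregation of element equations into the system equation **F** = **K** · **U** can be sketched for a chain of two-node linear elements; a minimal illustration in Python with NumPy (the unit stiffnesses and one degree of freedom per node are invented for the example):

```python
import numpy as np

def assemble_global_stiffness(n_nodes, elements, k_el):
    """Aggregate the 2x2 stiffness submatrices of two-node elements into
    the global matrix K by summing contributions at shared nodes."""
    K = np.zeros((n_nodes, n_nodes))
    for (i, j), k in zip(elements, k_el):
        # element stiffness of a two-node linear (bar/spring) element
        Ke = k * np.array([[1.0, -1.0],
                           [-1.0, 1.0]])
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += Ke[a, b]
    return K

# three elements in a chain: nodes 0-1-2-3, unit stiffness each;
# interior nodes accumulate stiffness from both neighbouring elements
K = assemble_global_stiffness(4, [(0, 1), (1, 2), (2, 3)], [1.0, 1.0, 1.0])
```

Each interior diagonal entry of `K` receives a contribution from two elements, which is exactly the aggregation over a common node described above.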

Each structure must be stably supported; therefore, a part of the nodal displacements and rotations is known in advance. These represent the boundary conditions. In the finite‐element method, the forces acting on a structure are always given in the global coordinate system and are introduced into the equation of the structure through the elements' nodes [24]. Numerical analysis is the process of solving the equilibrium equations, the so‐called structure equation [25]. It becomes very complex for nonlinear problems, because the change in the geometry of the body must also be taken into account in order to obtain the correct solution. Nonlinear models can contain only a few, or an extremely large number of, variables. Thus, instead of the single solution obtained for linear problems, nonlinear problems are solved iteratively, since the stiffness matrix and the deformed shape of the body change during the computation. The most frequently used iterative method for finding roots in multidimensional spaces is the *Newton‐Raphson method*; its convergence is very effective for well‐chosen initial values.
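The Newton‐Raphson iteration mentioned above can be sketched for a small nonlinear equilibrium problem; a hedged illustration (the one-degree-of-freedom stiffening-spring model and its coefficients are invented for the example, not taken from the chapter):

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Solve R(u) = 0 iteratively: u <- u - J(u)^-1 R(u)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # the tangent stiffness (Jacobian) is re-evaluated every iteration,
        # mirroring the changing stiffness matrix of a nonlinear structure
        u = u - np.linalg.solve(jacobian(u), r)
    return u

# hypothetical stiffening spring: residual R(u) = k*u + c*u^3 - f
k, c, f = 2.0, 0.5, 3.0
residual = lambda u: np.array([k * u[0] + c * u[0] ** 3 - f])
jacobian = lambda u: np.array([[k + 3 * c * u[0] ** 2]])
u = newton_raphson(residual, jacobian, [1.0])
```

Starting from a well-chosen initial value (here `u0 = 1.0`), the iteration converges quadratically to the equilibrium displacement.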

Transformation of the equations of the individual elements from the local coordinate system (x, y, z) into the global system (X, Y, Z) is effected by transformation matrices. The equation that defines the rule of the transformation of the equation of a finite element from the local to the global coordinate system is given in the following form [24]:

$$\{\overline{\mathbf{F}}\} = [\overline{\mathbf{K}}]\{\overline{\mathbf{U}}\} \tag{5}$$

where $\{\overline{\mathbf{F}}\}$ are the nodal forces and $\{\overline{\mathbf{U}}\}$ the nodal displacements, both given in the global coordinate system. The system, which is discretized into finite elements, has *e* elements and *n* nodes, **Figure 5**; the equation of the system, Eq. (6) above, is obtained by aggregating the equations of expression Eq. (5) over all the elements *e* [24].

## **2.3. Hybrid models**

Hybrid models combine geometrically based and physically based techniques. In this method, the rough shape of the textile form is computed based on the geometrical model and physi‐ cally based methods are then employed to refine the final shape of the textile form, which is computationally efficient [1].

The work by Rudomin [26] proposes to shorten the computation time needed by traditional physical techniques by using a geometrical approximation as the starting condition. Kunii and Gotoda [27] proposed a hybrid method for the simulation of fabric wrinkling. The physical fabric model in this method consists of springs connecting points, a metric energy, and a curvature energy. During the simulation, the shape of the fabric was obtained by using a gradient descent method to find the energy minima; singularity theory was then used to characterize the resulting wrinkles. In the approach proposed by Taillefer [2, 28], the folds of fabric hanging between two points were characterized as horizontal or vertical. The horizontal folds were modeled using catenaries, whilst the vertical folds were modeled using a relaxation process, similar to that suggested in the work by Wail [4]. Other hybrid models for the simulation of fabric folds were proposed in the works described in Ref. [2].

## **3. Textile forms' computer simulation**

### **3.1. Case study 1: computer simulation of fabric and fused panel drape by using FEM**

A number of modeling and simulation techniques have been used for the representation of woven and knitted textile fabrics. Each of them has certain advantages, but also restrictions and limitations. Although the finite element method is mainly applied in mechanical, civil, and electrical engineering, it has also been successfully used in textile engineering for the modeling of textile fabrics and complex multilayered textile forms.

## *3.1.1. Modeling of fabric and fused panel*

The finite element method was used for modeling and simulating the drape of a fabric and a fused panel [29]. When modeling textile fabrics, we proceeded from the assumption that the fabric is a continuum with homogeneous orthotropic properties. Its structure is defined by the following rheological parameters: the modulus of elasticity in the warp and weft directions, the shear modulus in the warp and weft directions, and the Poisson's number. The model of a fused panel is based on the theoretical principles of laminate materials [30]. The fused panel is treated as a two‐layer laminate; one lamina is the fabric and the other is the fusible interlining. Fabrics and adhesive interlinings are characterized by local inhomogeneities and anisotropic properties. Therefore, we adopted the assumption that the interlining is a continuum with average homogeneous and orthotropic properties. Its structural features are described with the same rheological parameters as in the case of the fabric.

Simulations of the fabric and fused panels drapes were carried out according to a measuring process using KES methodology for fabric and fused panel. The joint, which connects the fabric and the adhesive interlining, is formed by using the thermoplastic material, and thus forming a matrix of connections of the fused panel. The resulting joint typically is not uniform over the entire surface due to the thermoplastic layer in the form of points [31]. However, we have assumed that the joint was uniformly distributed across the entire surface of the model of the fused panel.

Textile Forms' Computer Simulation Techniques
http://dx.doi.org/10.5772/67738

### *3.1.2. Geometrical model for simulation and numerical analysis of fabric and fused panel drape*

The geometric model for the numerical analysis of the draping of a fabric and laminate was designed in the shape and size of the testing specimen used with the Cusick Drape Tester measuring device. The test specimen, with a diameter of 300 mm, is centrally placed on a horizontal table/pedestal with a diameter of 180 mm. Thus, 60 mm of the specimen falls freely over the edge of the horizontal base under its own weight and, as a result, folds are formed in the textile specimen.

The geometric model is discretized with 240 finite elements. For this purpose, we have used thin 3D shell elements, type S9R5 [23], **Figure 6**. The part of the sample, which falls freely over the edge of the pedestal, is described by 120 shell elements. The remaining 120 shell elements describe the specimen positioned on the base. The pedestal for testing of the specimen is also modeled at the very edge of the base (for modeling the tangential rotational degree of freedom), **Figure 6**.
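The 120/120 split of the 240 shell elements can be mimicked with a simple polar grid over the circular specimen; a sketch (the two-ring, 120-sector layout is an assumption for illustration only, since the chapter specifies just the element totals):

```python
# sketch of the drape-test discretization: a polar grid over the circular
# specimen, split into elements over the pedestal and over the free overhang
SPECIMEN_R = 150.0   # mm (specimen diameter 300 mm)
PEDESTAL_R = 90.0    # mm (pedestal diameter 180 mm)
N_SECTORS = 120      # assumed number of angular subdivisions

# one radial band of elements on the pedestal, one on the 60 mm overhang
rings = [(0.0, PEDESTAL_R), (PEDESTAL_R, SPECIMEN_R)]
elements = [("pedestal" if 0.5 * (r0 + r1) <= PEDESTAL_R else "overhang")
            for (r0, r1) in rings
            for _ in range(N_SECTORS)]

supported = elements.count("pedestal")   # elements resting on the pedestal
overhang = elements.count("overhang")    # elements falling over the edge
```

Classifying element centroids by radius reproduces the partition described in the text: 120 elements on the base and 120 on the freely hanging part.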

The specimen model was observed under the load of its own weight. Newton‐Raphson's iterative method was used for solving the equations of a designed model.

**Figure 6.** Discretized model for testing the draping of a fabric and fused panel.


### *3.1.3. Results and discussion of modeling and numerical simulations of fabric and fused panel drapes*

**Figure 7** represents the model for performing simulations of fabric and fused panel draping. The following parameters were analyzed: maximum and minimum amplitude of folds, maxi‐ mum and minimum deflection, and the depth and number of folds.

The results of the draping tests using the Cusick Drape Tester measuring device (five tests) and of the numerical simulations of a fabric (F‐1) are shown in **Figure 8**. The results of the draping tests for two fused panels consisting of the same fabric and two different fusible interlinings (F‐1\_L1 and F‐1\_L2) are shown in **Figures 9** and **10**.
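The averaged experimental values and the experiment–simulation differences reported with Figures 8–10 can be reproduced directly from the five individual drape tests; a small sketch using the maximum-deflection readings tabulated for fabric F‐1 and fused panel F‐1\_L‐1:

```python
# five Cusick drape tests of fabric F-1: maximum deflection in mm
f1_max_deflection = [59.24, 59.85, 59.70, 59.70, 59.70]
exper_mean = sum(f1_max_deflection) / len(f1_max_deflection)
# the "Exper." column is the mean of the five measurements (59.64 mm for F-1)

# fused panel F-1_L-1: mean experimental vs. simulated maximum deflection
exper, simul = 56.98, 45.33
diff_percent = (exper - simul) / exper * 100.0
# the "Diff. (%)" column is the relative deviation (20.45 % for F-1_L-1)
```

The same mean-and-deviation computation applies to each row of the tables (number of folds, deflections, fold depth, amplitudes).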

The numerical analysis of the draping of fabrics and fused panels was aimed at studying the impact of the material properties of the woven fabric and fused panel, as components of garments, on their real behavior. Related approaches for modeling the material properties were used in order to ensure the comparability of all analyses. Here, the specimens were exposed to the gravitational force, which caused relatively large displacements.

The numerical computation of the problem of draping of fabrics and fused panels was carried out using static analysis. Static analysis is more favorable than dynamic analysis in terms of computation time, and there were no significant differences between the draping results obtained by the two approaches.

**Figure 7.** Model for simulation of a woven fabric and fused panel.

The experimentally obtained forms of draping of the woven fabrics show similarity when compared with the draping results of the numerical simulation with the ABAQUS software, **Figure 8**. The figures indicate that the form of the draped fabric in experimental testing is never exactly the same (five tests). Therefore, it is also unrealistic to expect that the form of the computer‐simulated draped fabric would be identical to that in the real experiment. The form of a simulated fabric is symmetric, because the fabric is treated as an orthotropic material, **Figure 8a**. This cannot be expected of experimentally obtained forms of draped fabrics; the asymmetry in the folds of the tested fabrics is due to the locally inhomogeneous structure of woven fabrics. The results of fabric draping show a good correlation between the experimental and the calculated numerical values regarding the maximum and minimum deflection of folds, **Figure 8**.

The results of experimental testing and numerical simulation of the draping of fused panels (F‐1\_L‐1 and F‐1\_L‐2) show a particularly good match for the number and depth of folds and the maximum amplitude, **Figures 9** and **10**. Comparison between the experimental and numerical results showed that the behavior of the simulated fused panels was more rigid (smaller number of folds, lower maximum and minimum deflections and amplitudes, greater depth of folds), **Figures 9** and **10**. A detailed analysis indicated that the cause of this behavior lies in the approach to modeling the joint between the fabric and the fused interlining. From studies of bending [32], it is apparent that if there are *n* laminae with no connection between them, their bending stiffness is significantly lower than in the case of *n* laminae fused by joints. The problem can be illustrated using a rectangular beam. The deflection *f* of a beam is inversely proportional to the moment of inertia of the beam's cross‐section. In the case of a rectangular beam composed of *n* laminae, the proportionality can be expressed in the following form:

$$f \propto \frac{1}{n^{\alpha}} \tag{9}$$

**Figure 8.** Results of experimental testing and numerical simulations of a fabric F‐1 (fabric code: F‐1; measured/calculated parameters, with different views a–d).

| Parameter | Meas. 1 | Meas. 2 | Meas. 3 | Meas. 4 | Meas. 5 | Exper. | Simul. | Diff. |
|---|---|---|---|---|---|---|---|---|
| Number of folds | 8 | 7 | 7 | 8 | 8 | 7,6 | 6 | 1,6 |
| Max. deflection / mm | 59,24 | 59,85 | 59,70 | 59,70 | 59,70 | 59,64 | 59,7 | 0,06 |
| Min. deflection / mm | 36,13 | 35,73 | 33,90 | 37,79 | 38,04 | 36,32 | 36,4 | 0,08 |
| Fold depth / mm | 38,40 | 44,00 | 43,60 | 40,70 | 40,50 | 41,44 | 36,32 | 5,12 |
| Max. amplitude / mm | 137,9 | 138,2 | 139,5 | 136,6 | 136,4 | 137,72 | 149,25 | 11,53 |
| Min. amplitude / mm | 99,5 | 94,2 | 95,9 | 95,9 | 95,9 | 96,28 | 112,93 | 16,65 |

where n is the number of laminae, α is the joint quality parameter, limited as follows: 1≤ α ≤ 3.

In case α equals 1, the laminae are not joined; if α equals 3, the laminae are fully joined. The values of α are closer to 1 in fused panels with a pointed deposit of glue, whereas in fusible interlinings with the thermoplastic material applied in paste form, α is closer to 3. This was taken into account when modeling the joints of the fused panels, corresponding to the difference observed when comparing the experimental and numerical results.
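The scaling in Eq. (9) follows from the moment of inertia of the laminated cross-section; a short check in Python comparing the two limiting cases, α = 1 (unbonded laminae, whose inertias simply add) and α = 3 (perfectly bonded laminae acting as one solid beam). The beam dimensions are hypothetical:

```python
def moment_of_inertia(b, h):
    """Second moment of area of a b x h rectangle about its centroid."""
    return b * h ** 3 / 12.0

n, b, h = 4, 1.0, 1.0                            # four laminae of thickness h
I_unbonded = n * moment_of_inertia(b, h)         # alpha = 1: f ~ 1/n
I_bonded = moment_of_inertia(b, n * h)           # alpha = 3: f ~ 1/n^3
# deflection f is inversely proportional to I, so the bonded beam
# deflects n^2 times less than the unbonded stack of the same laminae
ratio = I_bonded / I_unbonded
```

For any n the ratio equals n², which is why a joint quality α between 1 and 3 interpolates between the two limits.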

It can be concluded that draping represents a spatial problem. Therefore, for realistic modeling of draping of fused panels, it is necessary to carry out preliminary modeling of the joints with



respect to the function of friction. In our case, the friction depends on the normal force in the joint, as well as on the relative displacement between the two layers. From experimental studies [31], it can be concluded that the rheological model functionally depends on the deformation state.

**Figure 9.** Results of experimental testing and numerical simulations of a fused panel F‐1\_L‐1 (fabric code: F‐1\_L‐1; measured/calculated parameters, with different views a–d).

| Parameter | Meas. 1 | Meas. 2 | Meas. 3 | Meas. 4 | Meas. 5 | Exper. | Simul. | Diff. (%) |
|---|---|---|---|---|---|---|---|---|
| Number of folds | 7 | 7 | 7 | 7 | 8 | 7,2 | 6 | 16,67 |
| Max. deflection / mm | 56,31 | 56,60 | 57,76 | 57,45 | 56,77 | 56,98 | 45,33 | 20,45 |
| Min. deflection / mm | 27,93 | 29,40 | 28,86 | 29,22 | 31,60 | 29,40 | 23,63 | 19,63 |
| Fold depth / mm | 32,40 | 32,40 | 34,70 | 35,10 | 30,60 | 33,04 | 37,40 | 13,20 |
| Max. amplitude / mm | 143,1 | 142,3 | 142,6 | 142,4 | 141,0 | 142,28 | 139,5 | 1,95 |
| Min. amplitude / mm | 110,7 | 109,9 | 107,9 | 107,3 | 109,4 | 109,04 | 102,1 | 6,36 |

**Figure 10.** Results of experimental testing and numerical simulations of a fused panel F‐1\_L‐2 (fabric code: F‐1\_L‐2; measured/calculated parameters, with different views a–d).

| Parameter | Meas. 1 | Meas. 2 | Meas. 3 | Meas. 4 | Meas. 5 | Exper. | Simul. | Diff. (%) |
|---|---|---|---|---|---|---|---|---|
| Number of folds | 6 | 6 | 6 | 6 | 5 | 5,8 | 6 | 3,45 |
| Max. deflection / mm | 51,72 | 51,60 | 51,90 | 52,01 | 51,78 | 51,8 | 48,82 | 5,75 |
| Min. deflection / mm | 26,15 | 27,55 | 26,96 | 26,76 | 26,76 | 26,74 | 22,94 | 14,21 |
| Fold depth / mm | 23,60 | 23,22 | 23,50 | 23,80 | 19,50 | 22,72 | 24,3 | 6,95 |
| Max. amplitude / mm | 144,0 | 143,3 | 143,6 | 143,7 | 143,7 | 143,66 | 140,1 | 2,48 |
| Min. amplitude / mm | 120,4 | 120,1 | 120,1 | 119,9 | 124,2 | 120,94 | 115,8 | 4,25 |

#### **3.2. Case study 2: computer simulation of functional clothes for wheelchair users**

3D scanning and computer simulation techniques were studied for the development of individualized functional garments for wheelchair users from the perspective of ergonomic comfort in a sitting posture, functional and aesthetic requirements, and needs regarding their health protection.

Some recent studies have shown that clothes for disabled users should not only be based on various design, fashion and comfort concepts, but should also consider particular medical problems [33–37]. Interviews conducted in Slovenia among 58 adult respondents revealed that paraplegic wheelchair users also face accompanying health problems because of their primary disease [35]. They mostly face incontinence (66.7%), infection and inflammation of the urinary tract (50.0%) and frequent colds (33.0%), while some of them also have pressure sores (14.6%) and skin irritations and inflammations (12.5%). Hand pains (50.0%) and leg cramps (41.7%) are common health problems of paraplegics. It is well known that they also face limited mobility of the hands, atrophy of the leg muscles, poor blood circulation, and poor regulation of the body temperature of the lower extremities [34, 38, 39]. With respect to the above facts, paraplegic wheelchair users have difficulties wearing regular garments due to their insufficient functionality and protection.

#### *3.2.1. 3D scanning*

Producers of 3D human body scanners usually offer software for the visualization of the scanned (standing) 3D body model and the automatic extraction of anthropometrical dimensions based on the standard ISO 8559 [43]. However, this software can neither represent a scanned sitting body nor extract the anthropometrical body dimensions automatically. Research on a sitting posture's 3D body model, obtained by scanning with the Vitus Smart XXL human body scanner and two general‐purpose optical scanners (GOM Atos II 400 and the Artec™ Eva 3D hand scanner), showed that a more appropriate digitized mesh can be achieved using the general‐purpose optical scanners [34, 40, 41], **Figure 11**. Digitizing was carried out on a rotating chair. Accurate sitting 3D body models were achieved after the modeling and reconstruction procedures described in detail in [34]. In this research, fully mobile persons were involved in order to avoid unnecessary burdening of paraplegics at this stage of the research.

and manual measurements indicated that the mobility of a person does not affect the accuracy

**Figure 12.** Scanning of the IMP by using a scanning chair and optical hand scanner Artec Eva 3D, and virtual measuring

IMP

(a) GOM ATOS II 400 optical scanner (b) Artec Eva 3D hand scanner

(c) Virtual measuring of 3D body dimensions

Textile Forms' Computer Simulation Techniques

http://dx.doi.org/10.5772/67738

83

In this part of the research, study regarding the virtual prototyping of functional garments was carried out on a sitting 3D body models by using the Optitex 3D program. Construction of the garments' pattern designs was based on the virtually measured 3D body models' dimensions. In the study regarding the pants and blouse pattern designs for a sitting posture a reconstruction procedure of the basic patterns was performed in order to obtain well‐fitted garments, which also meet demands of the wheelchair users due to additional health problems (pressure sores, obstruction of the blood flow, inflammation of the urinary tract, etc.) and aesthetic appearance [34]. In this research, a comparison between the real garments and virtual prototypes showed that development of functional garments for a sitting posture is extremely suitable when using scanned 3D body models and computer simulation techniques. In addition, experiments have proven a synergistic effect of the computer simulation techniques during the development process of the ergonomic garments and their great potential for use and radical changes in the

of the virtual measurements and sitting posture 3D body models, respectively [36].

(a) Chair for scanning of IMP (b) Scanned and reconstructed 3D body model of

**Figure 11.** Scanning of fully mobile persons in a sitting posture.

of the 3D body dimension [36].

production of custom‐made garments for wheelchair users [36].

*3.2.2. Development of functional garments for wheelchair user by using the computer simulation*

The experiences gained from this study enabled us to include the immobile persons within the research. With respect to the poor body balance due to spine injury and modeling/reconstruc‐ tion procedure of the scanned 3D body models, there was a need to develop a special chair for scanning, adjustable to the individuals' body dimensions, **Figure 12(a)**. Scanning of the paraplegic wheelchair users was performed by using the optical hand scanner Artec Eva 3D. The 3D body models were achieved after modeling and reconstruction procedures described in a source [34]. In addition, a comparative analysis of the scanned 3D bodies' virtual mea‐ surements and manual measurements was performed for fully mobile and immobile persons to find if the poor body balance of the immobile persons affects the accuracy of the 3D body models. During manual measurements, using a measuring tape, locations of the body dimen‐ sions, anthropometrical landmarks and standard procedure for the human body measuring were taken into account according to the standard ISO 8559 [42]. All dimensions of the limbs were measured on the left limbs. The virtual measurements of the scanned 3D body models were performed at the same locations as described for the manual measurements using the measuring tools of the OptiTex 3D system, **Figure 12(c)**. The statistical analysis of the virtual

(a) GOM ATOS II 400 optical scanner (b) Artec Eva 3D hand scanner

**Figure 11.** Scanning of fully mobile persons in a sitting posture.

Some recent studies have shown that clothes for disabled users should not only be based on various design, fashion and comfort concepts, but should also consider particular medical problems [33–37]. Interviews conducted in Slovenia among 58 adult respondents revealed that paraplegic wheelchair users also face accompanying health problems caused by their primary disease [35]. They are mostly affected by incontinence (66.7%), infection and inflammation of the urinary tract (50.0%) and frequent colds (33.0%), while some of them also have pressure sores (14.6%) and skin irritations and inflammations (12.5%). Hand pain (50.0%) and leg cramps (41.7%) are common health problems of paraplegics. It is well known that they also face limited mobility of the hands, atrophy of the leg muscles, poor blood circulation, and poor regulation of body temperature in the lower extremities [34, 38, 39]. With respect to these facts, paraplegic wheelchair users have difficulties wearing regular garments due to their insufficient functionality and protection. Their clothing should therefore be adapted to a sitting posture and to functional and aesthetic requirements and needs regarding their health protection.

#### *3.2.1. 3D scanning*

Producers of 3D human body scanners usually offer software for visualization of the scanned (standing) 3D body model and automatic extraction of anthropometrical dimensions based on the standard ISO 8559 [43]. However, this software can neither represent a scanned sitting body nor extract anthropometrical body dimensions automatically. Research on a sitting posture's 3D body model, obtained by scanning with the Vitus Smart XXL human body scanner and two general-purpose optical scanners (GOM ATOS II 400 and the Artec™ Eva 3D hand scanner), showed that a more appropriate digitized mesh can be achieved using the general-purpose optical scanners [34, 40, 41], **Figure 11**. Digitizing was carried out on a rotating chair. Accurate sitting 3D body models were achieved after the modeling and reconstruction procedures that are described in detail in [34]. Only fully mobile persons were involved at this stage of the research, to avoid unnecessary burdening of the paraplegics.

The experiences gained from this study enabled us to include immobile persons in the research. Because of the poor body balance caused by spine injury, and with regard to the modeling/reconstruction procedure of the scanned 3D body models, there was a need to develop a special scanning chair, adjustable to the individual's body dimensions, **Figure 12(a)**. Scanning of the paraplegic wheelchair users was performed with the Artec Eva 3D optical hand scanner. The 3D body models were achieved after the modeling and reconstruction procedures described in [34]. In addition, a comparative analysis of virtual measurements of the scanned 3D bodies and manual measurements was performed for fully mobile and immobile persons, to determine whether the poor body balance of the immobile persons affects the accuracy of the 3D body models. During the manual measurements with a measuring tape, the locations of the body dimensions, the anthropometrical landmarks and the standard procedure for measuring the human body were taken into account according to the standard ISO 8559 [42]. All limb dimensions were measured on the left limbs. The virtual measurements of the scanned 3D body models were performed at the same locations as the manual measurements, using the measuring tools of the OptiTex 3D system, **Figure 12(c)**. The statistical analysis of the virtual and manual measurements indicated that the mobility of a person does not affect the accuracy of the virtual measurements or of the sitting posture 3D body models [36].

**Figure 12.** Scanning of the IMP by using a scanning chair and optical hand scanner Artec Eva 3D, and virtual measuring of the 3D body dimension [36].
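A paired-difference check of this kind can be sketched in a few lines. This is only an illustrative sketch: the girth values below are placeholders, not the study's data, and `paired_difference_summary` is a hypothetical helper name.

```python
from statistics import mean, stdev

def paired_difference_summary(manual, virtual):
    """Summarize paired differences between manual tape measurements and
    virtual measurements taken on a scanned 3D body model (all in cm)."""
    diffs = [v - m for m, v in zip(manual, virtual)]
    return {
        "mean_diff_cm": round(mean(diffs), 2),
        "sd_diff_cm": round(stdev(diffs), 2) if len(diffs) > 1 else 0.0,
        "max_abs_diff_cm": round(max(abs(d) for d in diffs), 2),
    }

# Illustrative values only (e.g. hip, waist, chest, thigh girths in cm).
manual_cm  = [100.2, 80.0, 99.2, 55.4]
virtual_cm = [100.6, 80.3, 98.9, 55.9]

summary = paired_difference_summary(manual_cm, virtual_cm)
print(summary)
```

A small mean difference and a small maximum absolute difference over many paired dimensions is the kind of evidence behind the conclusion that mobility does not affect measurement accuracy.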

#### *3.2.2. Development of functional garments for wheelchair users by using computer simulation*

In this part of the research, a study of the virtual prototyping of functional garments was carried out on sitting 3D body models by using the OptiTex 3D program. The construction of the garments' pattern designs was based on the dimensions virtually measured on the 3D body models.

In the study of pants and blouse pattern designs for a sitting posture, a reconstruction procedure of the basic patterns was performed in order to obtain well-fitted garments that also meet the demands of wheelchair users arising from their additional health problems (pressure sores, obstruction of the blood flow, inflammation of the urinary tract, etc.) and aesthetic appearance [34]. In this research, a comparison between real garments and virtual prototypes showed that the development of functional garments for a sitting posture is extremely well suited to the use of scanned 3D body models and computer simulation techniques. In addition, the experiments proved the synergistic effect of computer simulation techniques in the development of ergonomic garments and their great potential for use in, and for radically changing, the production of custom-made garments for wheelchair users [36].
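As a rough illustration of how pattern drafting can start from virtually measured dimensions (this is a generic sketch, not the authors' construction method), a panel width can be derived from a body girth plus wearing ease. The ease value and the helper name `pattern_block_width` below are hypothetical.

```python
# Illustrative sketch: one basic pattern-block width derived from a body
# girth plus wearing ease, split over the number of pattern panels.
# Not the authors' actual drafting procedure.

def pattern_block_width(girth_cm: float, ease_cm: float, panels: int = 4) -> float:
    """Width of one pattern panel: (body girth + ease) divided over panels."""
    return (girth_cm + ease_cm) / panels

# Hips girth as measured on a sitting 3D body model (illustrative value);
# extra ease is assumed because girths increase in a sitting posture.
hips_sitting_cm = 109.8
ease_cm = 4.0  # assumed wearing ease

print(pattern_block_width(hips_sitting_cm, ease_cm))
```

The point of measuring on the *sitting* model is visible here: drafting from a standing hips girth would produce a visibly narrower, tighter panel.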

The 3D scanning and computer simulation approach was studied for the design of functional garments for the health protection of wheelchair users [35, 37]. It was based on the 58 in-depth interviews conducted with adult wheelchair users at the national level, as described at the beginning of this chapter. The study focused on the development of functional pants, adapted to individuals who suffer from pressure sores, incontinence or sweating in the crotch area, between the thighs and under the buttocks. In addition, we concentrated on the development of special protective garments, such as a sitting bag and a cape, for protection from external influences, i.e., cold, wind and humidity, which can cause immobile persons problems with the temperature of the lower extremities, chronic urinary infections and frequent colds. Chronic moistening of the skin was found to be a major cause of potential skin irritations and inflammations, or even wounds, due to incontinence or sweating. Therefore, by using antimicrobial and antioxidative textile materials (AATM) [43] in specific parts of the pants (the crotch area, between the thighs, and the contacts of the body with the wheelchair), prevention of potential health problems can be achieved.


Textile Forms' Computer Simulation Techniques

http://dx.doi.org/10.5772/67738


The development of well-fitted functional pants on the scanned 3D body model enabled us: (a) to simulate the person's morphological shape and extract the main features of the body shape; and (b) to simulate and validate the pants pattern design with AATM integrated in exact parts of the pants on the 3D human body, **Figure 13(a)**. These pants act as protection against chronic moistening of the skin. In addition, a functional pants pattern design was developed for a wheelchair user with a pressure sore on the hips. With the help of 3D scanning of this person, the AATM was integrated in the exact part of the pants corresponding to the location of the wound, and the pants pattern design was virtually simulated and validated on this 3D body model, **Figure 13(b)**. These pants act in a curative manner regarding the pressure sore on the hips. This study represents new steps toward an efficient approach to the responsible development of functional garments not only for wheelchair users, but also for the elderly and for persons who are forced into a sitting posture during the day and cannot find appropriate clothes in regular stores. In addition, 3D scanning of immobile persons with incontinence pads, diapers, or briefs is also a challenge for the development of well-fitting pants pattern designs for individuals by using 3D virtual prototyping.

**Figure 13.** Development of functional pants pattern designs by using 3D scanning and virtual simulation techniques [37]: (a) regular pants adapted to a sitting posture, pants pattern design with AATM (dark grey), pants with AATM (grey); (b) location of a wound on a 3D body model, pants pattern design with AATM (dark grey), pants with AATM on the hips (grey).

The garment's protective function can be achieved through the body's heat balance, in order to ensure thermal comfort and a physiological sense of safety. This can be obtained by appropriate textile material properties, clothing design, and the design of the complete clothing system and its components. The virtual prototyping approach was used to develop the sitting bag and cape pattern designs [35]. The development of the sitting bag was performed on scanned 3D body models of the wheelchair users, whilst the cape was developed on a scanned 3D body model together with a wheelchair. The garments' fitting was carried out using the measured mechanical properties of the fabrics, which enabled us to develop the correct form of the pattern pieces. The sitting bag was designed according to the trend toward newer forms of wheelchairs, which have narrower leg supports. The interviewees' requests related to the length of the sitting bag were also considered; therefore, it covers only the lower extremities. The respondents' request that the cape should cover the backrest and wheels of the wheelchair was also taken into account in the cape's design. The developed virtual prototypes of the special protective clothing are shown in **Figure 14**. In this figure, we can see that the complex 3D model (body together with wheelchair) was used for the development of the cape. Deformation of the cape occurred at the location of the wheelchair handles, along with an error in the fabric's collision with a sharp part of the 3D object, **Figure 14(a)**. We could not avoid this even at higher simulation resolutions. During simulations of the sitting bag, setting the gravity to 0 ms−2 enabled us to develop a pattern design whose dimensions fit different body shapes, **Figure 14(b)**.

**Figure 14.** Virtual prototypes of special protective clothing for paraplegic wheelchair users [35].

The study presented in this part of the chapter showed the high usability, efficiency and benefits of 3D scanning and virtual simulation techniques in the development of custom-made functional garments and ready-made protective garments for wheelchair users. On the other hand, we still face poor robustness when positioning the 2D pattern pieces around the sitting 3D body model, due to the limited possibilities of folding the pattern pieces, which indicates a potential future challenge in the development of commercial 3D CAD programs for garment simulation on sitting body postures.
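The zero-gravity setting used for the sitting bag can be illustrated with a minimal mass-spring strip. This is a generic sketch, not the commercial solver's algorithm: with gravity switched off the springs remain at rest length, so a simulated panel keeps its drafted dimensions instead of sagging.

```python
# Minimal 1D mass-spring "cloth strip" relaxed by damped explicit integration.
# Generic illustration of why a 0 m/s^2 gravity setting preserves the drafted
# pattern dimensions; not the actual algorithm of any commercial 3D CAD system.

def relax_strip(n=5, rest=1.0, k=50.0, g=9.81, steps=4000, dt=0.002):
    """Top particle fixed; return the total strip length after relaxation."""
    y = [-i * rest for i in range(n)]              # start at rest length
    v = [0.0] * n
    for _ in range(steps):
        for i in range(1, n):                      # particle 0 is pinned
            f = -g                                 # gravity per unit mass
            f += k * ((y[i - 1] - y[i]) - rest)    # spring to particle above
            if i + 1 < n:
                f -= k * ((y[i] - y[i + 1]) - rest)  # spring to particle below
            v[i] = 0.98 * (v[i] + f * dt)          # damped velocity update
            y[i] += v[i] * dt
    return y[0] - y[-1]                            # current length of the strip

print(relax_strip(g=0.0))   # stays at the drafted length
print(relax_strip(g=9.81))  # sags: longer than the drafted length
```

With `g=0.0` the initial rest-length configuration is already in equilibrium, so the strip length never changes; with gravity on, each spring stretches under the weight it carries, which is exactly the distortion the authors avoid when fitting the sitting bag to many body shapes.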

### **3.3. Case study 3: computer simulation of protective clothes for sport aircraft pilots**

Since protective equipment is necessary in many areas of human activity, it is essential to take an individual approach to the design of all protective elements (clothes, gloves, shoes, helmets, etc.). During construction and design, the protection and functionality of the protective clothes are of the utmost importance, and the designer and constructor must have the necessary knowledge about the clothing elements, the usual body movements and the additional elements of protection [44].

This study focuses on the development of a special protective garment for sport aircraft pilots, the so-called anti-g suit. This is a special form of flight suit, worn by sport aircraft pilots, who are exposed to high acceleration forces. The pilot sits in a cramped airplane cabin in an exact position. Therefore, we assume that the suit should be developed according to the body dimensions in this position. Namely, it is well known that wearing comfort differs across body shapes and dynamic body postures, due to the changes in body dimensions [45–48].

The present study is still at an early stage; therefore, the importance of 3D body scanning and virtual simulation is shown through the development process of the suit pattern design. 3D body scanning was carried out using three scanning postures (SP), i.e., the standard standing posture (SP1\_StP), a sitting posture (SP2\_SiP) and a driving-sitting posture (SP3\_DSiP), **Figure 15**, by using the 3D human body scanner Vitus Smart XXL at the Faculty of Textile Technology, University of Zagreb, Croatia. During scanning, especially for SP3\_DSiP, we were restricted by the size of the scanning area (1 × 1 m). Therefore, we were not able to scan a person in every proper sitting posture. In addition, the scanned 3D mesh was highly deformed, particularly in the areas of the crotch, thighs, and calves. The modeling and reconstruction of the 3D body models was therefore a difficult and lengthy process. Based on the experience gained from the study presented in Section 3.2, we assume that it would be better to use a general-purpose optical scanner in this research, such as the Artec™ Eva 3D hand scanner.

In this study, 24 body dimensions were virtually measured according to the standard ISO 8559 [42], together with three additional body dimensions that ISO 8559 does not specify (the transverse hips girth in a sitting posture, and the front and back overall length in a standing/sitting posture), by using the ScanWorks software.

**Figure 15.** Scanned 3D body models in three different postures.

The cross sections of the 3D body models at the three measurement positions of the body dimensions, i.e., chest girth, waist girth, and hips girth (for the sitting postures, the transversal hips girth), are presented in **Figure 16**, together with the corresponding dimensions in the standard and dynamic body postures:

| Body dimension | SP1\_StP | SP2\_SiP | SP3\_DSiP |
|---|---|---|---|
| Chest girth (cm) | 99.2 | 99.0 | 101.8 |
| Waist girth (cm) | 80.0 | 81.6 | 85.6 |
| Hips girth (cm) | 100.2 | 109.8 | 119.1 |

It can be seen that different cross sections and body dimensions were obtained for the different body postures. The main reason for this is the muscle groups activated in the different body postures, which should be considered when constructing the garment pattern design. The greatest chest, waist, and hips girths were obtained in the body posture SP3\_DSiP. The greatest difference in hips girth was found between the standing body model and the sitting body model in a driving posture, which was also expected. Therefore, the construction of the basic suit pattern design was based on the dimensions measured for the posture SP3\_DSiP.

**Figure 16.** The cross-sections of the 3D body models and body dimensions.

The virtual simulations of the suit for all body postures are shown in **Figure 17**. In this figure, a poor fit of the suit to the standing 3D body model can be seen, especially on the back and in the pants length. This result is understandable in terms of the body dimensions used, which were measured in the sitting posture SP3\_DSiP. For this posture, the greatest tension (red) occurred in the area of the armhole seams and over the shoulders, which indicates that the over-shoulder measure should be increased. Even higher tension over the shoulders can be seen for both sitting postures, SP2\_SiP and SP3\_DSiP, which confirms this indication. A more appropriate fit of the suit to the sitting 3D body model can be seen in the driving posture SP3\_DSiP, with the exception of the shoulder and hip positions, where the fit must be improved in the continuation of the research. In our case, we need to construct a special protective suit, which requires very little freedom of movement.

Based on the above, we will, using virtual prototyping, confirm the final pattern design of a suit, which will then be upgraded with suitable elements into an anti-g suit.
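Girths such as those compared across the three postures are, in effect, perimeters of cross-section curves taken from the scanned mesh at a given height. A minimal sketch of that computation follows; the elliptical point set is synthetic, standing in for the slice points a real scanner export would provide.

```python
# Sketch: a girth (chest, waist or hips) read off a scanned 3D body model
# by slicing the mesh at a given height and summing the edge lengths of the
# resulting closed cross-section polygon. Synthetic data, not scanner output.
from math import cos, sin, pi, hypot

def girth_cm(points):
    """Perimeter of a closed cross-section polygon given ordered (x, y) points."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        total += hypot(x1 - x0, y1 - y0)
    return total

# Synthetic hips cross-section: an ellipse with semi-axes 19 cm and 13 cm.
slice_pts = [(19 * cos(2 * pi * k / 360), 13 * sin(2 * pi * k / 360))
             for k in range(360)]
print(round(girth_cm(slice_pts), 1))  # close to the ellipse circumference
```

Comparing such perimeters for slices of the standing and sitting models reproduces exactly the kind of posture-dependent girth differences reported above.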

**Figure 17.** Virtual prototyping of the overall on a 3D body model in different postures: (a) SP1\_StP; (b) SP2\_SiP; (c) SP3\_DSiP.

## **4. Conclusions and future challenges**

The engineering approaches to the design of textile forms for particular purposes presented in this chapter show the benefits and limitations of 3D body scanning and computer simulation techniques and outline future challenges.

The study of fabric and laminate drape using FEM in Section 3.1 showed that draping represents a spatial problem. Therefore, for realistic modeling of the draping of fused panels, it is necessary to carry out preliminary modeling of the joints with respect to the friction function. In our case, friction depends on the normal force in the joint, as well as on the relative displacement between the two layers. From the experimental studies, it can be concluded that the rheological model depends functionally on the deformation state.

The case study in Section 3.2 indicated that 3D scanning and virtual simulation techniques are highly accurate and appropriate for the development of custom-made functional garments and ready-made protective clothing. However, there are still limitations in the virtual simulation of garments on sitting 3D body models when using commercial 3D CAD programs. One weakness is the poor robustness of positioning the 2D pattern pieces around the sitting 3D body model, owing to the limited possibilities of folding the pattern pieces and to the fabric's collision with sharp parts of 3D objects, as in the case of the wheelchair's handle. Future challenges in computer simulation techniques therefore certainly lie in the development of commercial 3D CAD software for the simulation of garments on a wide variety of nonstandard postures of 3D body models and 3D objects. One such challenge would certainly be to build a parametric sitting 3D body model, which would allow the development of garment pattern designs for the sitting population.

The part of the case study presented in Section 3.3 showed that we are on the right way to developing the anti-g suit. We found that, in the continuation of the study, a general-purpose optical hand scanner should be used to digitize a person in the proper sitting posture, which would enable the modeling of an accurate sitting 3D body model and the development of the pattern design of this special protective clothing.

## **Author details**

Andreja Rudolf1\*, Slavica Bogović2, Beti Rogina Car2, Andrej Cupar3, Zoran Stjepanovič1 and Simona Jevšnik4

\*Address all correspondence to: andreja.rudolf@um.si

1 Institute of Engineering Materials and Design, Faculty of Mechanical Engineering, University of Maribor, Maribor, Slovenia

2 Department of Clothing Technology, Faculty of Textile Technology, University of Zagreb, Zagreb, Croatia

3 Institute of Structures and Design, Faculty of Mechanical Engineering, University of Maribor, Maribor, Slovenia

4 Inlas, Slovenske Konjice, Slovenia

## **References**

[1] Wong SK. Modeling and simulation techniques for garments. In: Hu J, editor. Computer Technology for Textiles and Apparel. 1st ed. Cambridge: Woodhead Publishing Ltd; 2011. pp. 173–200.

[2] Ng HN, Grimsdale RL. Computer graphics techniques for modeling cloth. IEEE Computer Graphics and Applications. 1996;**16**:28–41. DOI: 10.1109/38.536273

[3] Cloth Modeling and Simulation. In: Magnenat-Thalmann N, editor. Modeling and Simulating Bodies and Garments. 1st ed. London: Springer-Verlag Ltd; 2010. pp. 71–138. DOI: 10.1007/978-1-84996-263-6

[4] Weil J. The synthesis of cloth objects. ACM SIGGRAPH Computer Graphics. 1986;**20**:49–54. DOI: 10.1145/15886.15891

[5] Hinds BK, McCartney J. Interactive garment design. The Visual Computer. 1990;**6**:53–61. DOI: 10.1007/BF01901066



[6] Ng HN, Grimsdale RL. GEOFF—A geometrical editor for fold formation. In: Chin R, Ip H, Naiman A, Pong TC, editors. Image Analysis Applications and Computer Graphics. Lecture Notes in Computer Science, Vol. 1024. Berlin: Springer-Verlag; 1995. pp. 124–131.

[20] Eischen JW, Deng S, Clapp TG. Finite element modeling and control of flexible fab‐ ric parts. Computer graphics in textiles and apparel (IEEE Computer Graphics and Applications). IEEE Computer Society Press. 1996;**16**:71–80. DOI: 10.1109/38.536277



**Chapter 5**


## **Computer Simulation of Bioprocess**

Jianqun Lin, Ling Gao, Huibin Lin, Yilin Ren, Yutian Lin and Jianqiang Lin

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67732

#### Abstract

Bioprocess optimization is important in order to make the bioproduction process more efficient and economical. The conventional optimization methods are costly and less efficient. On the other hand, modeling and computer simulation can reveal the mechanisms behind the phenomena to some extent, to assist the deep analysis and efficient optimization of bioprocesses. In this chapter, modeling and computer simulation of microbial growth and metabolism kinetics, bioreactor dynamics, and bioreactor feedback control are presented to show the application methods and the usefulness of modeling and computer simulation in the optimization of bioprocess technology.

Keywords: modeling, simulation, bioprocess, fermentation, bioreactor, control

## 1. Introduction

Bioindustry is important for the utilization of renewable resources, the development of environmentally friendly production processes, and a sustainable economy. In order to make bioprocesses more efficient and economical, bioprocess optimization and automatic control are needed. The conventional optimization methods cost considerable labor, time, and money; on the other hand, modeling and computer simulation can reveal the mechanisms behind the phenomena to some extent, to assist the deep analysis and optimization of bioprocesses. Modeling and computer simulation are far more efficient and economical, and are widely used in research and in modern bioindustries.

Bioprocess efficiency depends on the cell capability, the bioreactor performance, and the optimal control of the cultivation conditions. The metabolic network inside the cells involves thousands of enzymes, and enzyme expression and activities are dynamically affected by the cultivation conditions. As a result, the cultivation condition affects the cell growth, metabolism,

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

and product production in a sophisticated and nonlinear way. Controlling and maintaining near‐optimal cultivation conditions through proper operation and control of the bioreactor is needed to improve the production efficiency of the bioprocess.

Bioprocess mathematical modeling involves the modeling of the dynamic changes of the metabolic rates and their distribution inside the cells with the changes of time and cultivation conditions, the modeling of the dynamic changes of the reaction rates and mass transfer rates as well as the cultivation conditions inside the bioreactor, and the modeling of the dynamics of the bioreactor control system etc., based on which optimizations of the bioreactor operation and control strategies can be made and the results can be predicted and evaluated by computer simulation. In this chapter, examples of modeling and computer simulation of microbial growth and metabolism kinetics, bioreactor dynamics, and the feedback control of the bioreactor are given to show the application methods and the usefulness of modeling and computer simulation methods in bioprocess technology.

## 2. Modeling of microbial cell growth and metabolism

#### 2.1. Modeling of microbial cell growth

Cell growth is one of the most important variables to be investigated in bioprocess. The cell growth is usually described by the specific growth rate, μ, and the time course of cell concentration, X. The specific growth rate is defined by the increase in grams of cells (g) per gram dry cells (g) per hour (h), and can be modeled by Eq. (1)

$$
\mu = \frac{1}{X} \cdot \frac{dX}{dt} \tag{1}
$$
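As a rough numerical illustration (a sketch added here, not part of the chapter's own material), Eq. (1) can be approximated from two successive dry-cell-weight measurements by a finite difference, using the mean of the two measurements for X:

```python
def specific_growth_rate(x0, x1, dt):
    """Finite-difference estimate of Eq. (1): mu = (1/X) * dX/dt.

    x0, x1: cell concentrations (g/L) measured dt hours apart;
    X is approximated by the midpoint concentration."""
    return (x1 - x0) / dt / ((x0 + x1) / 2.0)

# Example: dry cell weight rises from 1.0 to 1.2 g/L within one hour
mu_est = specific_growth_rate(1.0, 1.2, 1.0)
```

The midpoint approximation is one arbitrary choice; for exponentially growing cultures, `ln(x1/x0)/dt` would be the exact estimate.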


The specific growth rate is related to many process variables, such as temperature (T), pH, dissolved oxygen (DO) concentration (CL), substrate concentration (S), product concentration (P), cell concentration (X), and time (t), as expressed by

$$\mu = f(T, pH, DO, S, P, X, t, \dots) \tag{2}$$

In real applications, only the key process variable(s) are included in Eq. (2) for simplification. The Monod equation [1], which uses the substrate concentration as the single independent variable, is shown by Eq. (3); T, pH, and in many cases CL are controlled to be constant and can be omitted from the equation.

$$
\mu = \frac{\mu\_m \cdot S}{k\_m + S} \tag{3}
$$

where μm is the maximum specific growth rate and km is the substrate affinity coefficient.
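A one-line implementation of Eq. (3) makes the two parameters easy to probe (an illustrative sketch; the parameter values are arbitrary, not taken from the chapter):

```python
def monod(s, mu_m, k_m):
    """Eq. (3): Monod specific growth rate as a function of substrate S."""
    return mu_m * s / (k_m + s)

# At S = k_m the rate is exactly half of mu_m; at S >> k_m it saturates at mu_m.
half_rate = monod(2.0, mu_m=0.5, k_m=2.0)
```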

The typical cell growth curve is of the "S" type, which has a lag growth phase and cannot be properly modeled by the Monod equation, as discussed later. At the initial cultivation stage, the cells need some time to adapt to the new environmental conditions, for induction of new enzymes needed for cell metabolism, etc., so the specific growth rate is zero or at a low value, resulting in the lag growth phase. One way to model the lag growth phase is to separate the newly inoculated cells into active cells, X, and inactive cells, Y, where the time for Y to turn into X conforms to the Pearson distribution expressed by Eqs. (4)–(7) [2].


$$\frac{dX}{dt} = \mu(t) \cdot X \tag{4}$$

$$
\mu(t) = \frac{\mu\_m \cdot X}{X + Y} \tag{5}
$$

$$Y = X\_0 - X \tag{6}$$

$$\mathbf{x}(t) = \int\_{-a\_1}^{t} c \left(1 + \frac{t}{a\_1}\right)^{m\_1} \left(1 - \frac{t}{a\_2}\right)^{m\_2} \exp\left(\mu t\right) dt \tag{7}$$

where a1, a2, m1, and m2 are the constants of the Pearson distribution. Other methods predict the lag growth phase by simply defining a lag time, tL, in terms of cell growth [2, 3]. One way to deal with the lag growth phase is to relate it to the change of μ defined by Eq. (8) [3].

$$
\mu(\mathbf{S}, t) = \frac{\mu\_m \cdot \mathbf{S}}{k\_m + \mathbf{S}} \cdot (\mathbf{1} - e^{-t/t\_L}) \tag{8}
$$
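Eq. (8) can be transcribed directly; the lag factor (1 − e^(−t/tL)) starts at zero and approaches one for t much larger than tL (an illustrative sketch, parameter values arbitrary):

```python
import math

def mu_lag(s, t, mu_m, k_m, t_lag):
    """Eq. (8): Monod rate damped by the lag factor (1 - exp(-t / t_lag))."""
    return mu_m * s / (k_m + s) * (1.0 - math.exp(-t / t_lag))

# At t = 0 the rate is zero; for t >> t_lag it recovers the plain Monod rate.
start = mu_lag(10.0, 0.0, mu_m=0.5, k_m=1.0, t_lag=2.0)
late = mu_lag(10.0, 50.0, mu_m=0.5, k_m=1.0, t_lag=2.0)
```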

After the lag growth phase, the specific growth rate increases gradually and the cells go into the exponential growth phase, which is expressed by Eqs. (9) and (10)

$$\frac{dX}{dt} = \mu \cdot X \tag{9}$$

$$X = X\_0 \cdot e^{\mu \cdot t} \tag{10}$$

where X0 is the initial cell concentration. After the exponential growth phase, the specific growth rate decreases gradually to zero because of nutrient limitation, accumulation of intracellular toxic intermediates, accumulation of inhibitors in the culture broth, etc., and the net cell growth tends to cease, entering the stationary growth phase. In order to model the decreasing cell growth rate and the stationary growth phase, the logistic growth model was developed [4], in which μ decreases with the increase of cell concentration, X, and μ reaches zero when X reaches its maximum value, Xm, shown by Eq. (11)

$$\frac{dX}{dt} = \mu\_m \cdot \left(1 - \frac{X}{X\_m}\right) \cdot X \tag{11}$$
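Eqs. (10) and (11) can be checked numerically. The sketch below (illustrative parameters, not from the chapter) derives the doubling time implied by Eq. (10) and integrates the logistic model of Eq. (11) with an explicit Euler step, reproducing the "S" shaped time course:

```python
import math

def doubling_time(mu):
    """Exponential phase, Eq. (10): X doubles when mu * t = ln 2."""
    return math.log(2.0) / mu

def logistic_course(x0, mu_m, x_m, hours, dt=0.01):
    """Explicit-Euler integration of the logistic model, Eq. (11)."""
    x, xs = x0, [x0]
    for _ in range(int(hours / dt)):
        x += mu_m * (1.0 - x / x_m) * x * dt
        xs.append(x)
    return xs

course = logistic_course(x0=0.1, mu_m=0.4, x_m=10.0, hours=40.0)
```

The Euler step is the simplest possible integrator; a stiff-aware ODE solver would be preferred for fitting real data.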

From the above analysis, it can be seen that the specific growth rate starts from zero or a low value in the lag growth phase, increases gradually to reach its maximum value in the exponential growth phase, and then decreases gradually in the declining growth phase. This makes the time course of the specific growth rate a "bell" type curve and the time course of cell concentration the typical "S" type curve (Figure 1), which cannot be well fitted by the models discussed above.

Figure 1. Graphical illustration of the model parameters [5]. I is the lag growth phase; II is the increased growth phase; III is the exponential growth phase; IV is the decreased growth phase; V is the stationary growth phase. kin is the maximum increasing rate of μ; kde is the maximum decreasing rate of μ; tin is the time point when dμ/dt equals kin; tde is the time point when dμ/dt equals kde; and tL is the lag time.

In order to simulate the "bell" type specific growth rate curve and the "S" type cell growth curve more accurately, the following model was developed, shown by Eqs. (12) and (13) [5]

$$
\mu(t) = \mu_m \cdot \frac{1}{1 + e^{-k_{\rm in}(t - t_{\rm in})}} \cdot \frac{1}{1 + e^{k_{\rm de}(t - t_{\rm de})}} \tag{12}
$$

$$\frac{dX}{dt} = \mu(t) \cdot X \tag{13}$$


where kin is the maximum increasing rate of μ; kde is the maximum decreasing rate of μ; tin is the time point when the increasing rate of μ equals kin; and tde is the time point when the decreasing rate of μ equals kde. All the parameters used in the model can be obtained graphically (Figure 1). One example of the application of this model is shown in Figure 2. To widen the application of the above model, Eq. (12) can be combined with the Monod model to develop Eqs. (12) and (13) into Eqs. (14) and (15)

Figure 2. Simulation of cell growth of Trichoderma reesei [5].
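The double-sigmoid model of Eqs. (12) and (13) can be integrated in a few lines. The parameters below are illustrative only, not the fitted T. reesei values behind Figure 2:

```python
import math

def mu_bell(t, mu_m, k_in, t_in, k_de, t_de):
    """Eq. (12): rising and falling sigmoids give the 'bell' shaped mu(t)."""
    rise = 1.0 / (1.0 + math.exp(-k_in * (t - t_in)))
    fall = 1.0 / (1.0 + math.exp(k_de * (t - t_de)))
    return mu_m * rise * fall

def simulate_growth(x0, hours, dt=0.01, **p):
    """Explicit-Euler integration of Eq. (13): dX/dt = mu(t) * X."""
    x, xs = x0, []
    for i in range(int(hours / dt)):
        x += mu_bell(i * dt, **p) * x * dt
        xs.append(x)
    return xs

params = dict(mu_m=0.3, k_in=0.8, t_in=5.0, k_de=0.8, t_de=20.0)
course = simulate_growth(0.1, 40.0, **params)  # 'S' shaped cell time course
```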


$$\mu(\mathcal{S}, t) = \frac{\mu\_m \cdot \mathcal{S}}{k\_m + \mathcal{S}} \cdot \frac{1}{1 + e^{-k\_{\rm in}(t - t\_{\rm in})}} \cdot \frac{1}{1 + e^{k\_{\rm de}(t - t\_{\rm de})}} \tag{14}$$

$$\frac{dX}{dt} = \mu(\mathbf{S}, t) \cdot X \tag{15}$$
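Eq. (14) is Eq. (3) multiplied by the two sigmoid gates of Eq. (12); a direct transcription (with arbitrary illustrative parameters) is:

```python
import math

def mu_combined(s, t, mu_m, k_m, k_in, t_in, k_de, t_de):
    """Eq. (14): Monod kinetics gated by the rising and falling sigmoids."""
    monod_term = mu_m * s / (k_m + s)
    rise = 1.0 / (1.0 + math.exp(-k_in * (t - t_in)))
    fall = 1.0 / (1.0 + math.exp(k_de * (t - t_de)))
    return monod_term * rise * fall

# Illustrative parameter set (not fitted to any data in the chapter)
P = dict(mu_m=0.3, k_m=0.5, k_in=0.8, t_in=5.0, k_de=0.8, t_de=20.0)
```

Coupling this rate to Eq. (15) and a substrate balance would then let substrate depletion, rather than only elapsed time, end the growth.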

involve the intracellular structure or the nonhomogeneity of the cells, respectively, are generally sophisticated and contain uneasily measurable model parameters, and are usually used

In many cases, the ratio of cell mass produced per substrate utilized is a constant and defined

YX=<sup>S</sup> ¼ � <sup>Δ</sup><sup>X</sup>

The minus sign in Eq. (21) is to ensure YX/S to be positive as ΔS is negative. From Eq. (21), it can be seen that substrate consumption is proportional to the cell growth, so that substrate con-

Further, the total substrate consumed can be considered of two parts, with one part for real cell

where YG is the maximum cell yield when μ tends to μm; ms the maintenance coefficient. From Eq. (23), Eq. (24) can be obtained showing the positive relationships between the specific

> <sup>¼</sup> ms μ þ 1 YG

From Eq. (24), it can be seen that in order to increase the cell yield, YX/S, a high value of μ

The specific substrate consumption rate is defined by the consumption in grams of substrate

¼ μ YG

qS <sup>¼</sup> <sup>1</sup> X � �dS

qS <sup>¼</sup> <sup>μ</sup> YX=<sup>S</sup> <sup>¼</sup> <sup>μ</sup> YG

<sup>Δ</sup><sup>S</sup> <sup>ð</sup>21<sup>Þ</sup>

Computer Simulation of Bioprocess http://dx.doi.org/10.5772/67732

<sup>þ</sup> ms � <sup>X</sup> <sup>ð</sup>23<sup>Þ</sup>

dt <sup>ð</sup>25<sup>Þ</sup>

þ ms ð26Þ

ð22Þ

101

ð24Þ

2.2. Modeling of microbial substrate uptake and product production

� dS dt <sup>¼</sup> <sup>1</sup> YX=<sup>S</sup> � dX dt <sup>¼</sup> <sup>μ</sup><sup>X</sup> YX=<sup>S</sup>

growth and the other part for life maintenance to develop Eq. (22) into Eq. (23)

1 YX=<sup>S</sup>

(g) per gram dry cells (g) per hour (h), and can be modeled by Eq. (25)

Then, Eq. (22) or (23) can be expressed in a simple way by Eq. (27)

as the cell yield from the substrate, YX/S, shown by Eq. (21)

� dS dt <sup>¼</sup> <sup>1</sup> YX=<sup>S</sup> � dX dt <sup>¼</sup> <sup>μ</sup><sup>X</sup> YX=<sup>S</sup>

sumption can be simply modeled by Eq. (22)

growth rate, μ, and the cell yield, YX/S.

From Eqs. (23) and (25), Eq. (26) can be obtained

should be maintained.

for theoretical purposes.

Even if Monod model has some limitations, it is still the widely used growth model in real applications for the major reasons of simplification and the single independent variable of substrate concentration, which is the key process variable to be investigated in many fermentation processes. In cases of high density fermentation, or substrate or product inhibition, modifications of Monod model are needed. Contois model shown by Eq. (16) is an example for high density fermentation, in which modeling the cell concentration is included in the denominator of the specific growth rate equation to show the limitation effect of high cell concentration on the growth, to make the specific growth rate to be reciprocal to the cell concentration (μ ∝ X�<sup>1</sup> ) at very low substrate concentration.

$$
\mu = \frac{\mu\_m \cdot \mathbf{S}}{k\_m \cdot \mathbf{X} + \mathbf{S}} \tag{16}
$$

In some cases, the substrates which have inhibitory effect on cell growth, like ethanol or acetate, etc., are used. One example of the growth model under noncompetitive substrate inhibition with KI >> Km is shown by Eq. (17)

$$\mu = \frac{\mu m \cdot \mathcal{S}}{K\_m + \mathcal{S} + \frac{\mathcal{S}^2}{Kl}} \tag{17}$$

One example for modeling product inhibition, like ethanol or lactic acid fermentation, is shown by Eq. (18)

$$
\mu = \frac{\mu m \cdot \mathcal{S}}{(k\_{\mathcal{S}} + \mathcal{S})} \cdot \left(1 - \frac{P}{P\_m}\right)^n \tag{18}
$$

In case of dual substrates, the growth model in form of the sum or product of two Monod type terms is often used for the substitutable and nonsubstitutable substrates, respectively. For example, glucose and glycerol are substitutable substrates which can be modeled by Eq. (19), while glucose and oxygen are nonsubstitutable substrates which can be modeled by Eq. (20).

$$
\mu = \mu\_m \cdot \left( \alpha\_1 \cdot \frac{S\_1}{K\_{m1} + S\_1} + \alpha\_2 \cdot \frac{S\_2}{K\_{m2} + S\_2} \right) \tag{19}
$$

$$
\mu = \mu\_m \cdot \frac{S\_1}{K\_{m1} + S\_1} \cdot \frac{S\_2}{K\_{m2} + S\_2} \tag{20}
$$

The growth models above are relatively simple unstructured and unsegregated models, and they are useful for practical applications. Structured and segregated growth models, which involve the intracellular structure or the nonhomogeneity of the cells, respectively, are generally sophisticated, contain model parameters that are not easily measured, and are usually used for theoretical purposes.
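As a quick numerical illustration of how these rate expressions differ, the sketch below evaluates the Monod, Contois [Eq. (16)], and substrate-inhibition [Eq. (17)] forms side by side. All parameter values here are arbitrary examples chosen for the comparison, not values from the chapter.

```python
# Compare the specific growth rate models discussed above.
# Parameter values are illustrative only.

def mu_monod(S, mu_m=0.5, k_m=0.2):
    """Monod model: mu = mu_m * S / (k_m + S)."""
    return mu_m * S / (k_m + S)

def mu_contois(S, X, mu_m=0.5, k_m=0.2):
    """Contois model, Eq. (16): high cell density X limits growth."""
    return mu_m * S / (k_m * X + S)

def mu_substrate_inhibition(S, mu_m=0.5, K_m=0.2, K_I=50.0):
    """Noncompetitive substrate inhibition, Eq. (17)."""
    return mu_m * S / (K_m + S + S**2 / K_I)

# At high S, Monod saturates near mu_m while the S^2/K_I term
# pulls the substrate-inhibited rate back down:
print(mu_monod(100.0), mu_substrate_inhibition(100.0))
# At low S and high X, the Contois rate is roughly mu_m * S / (k_m * X):
print(mu_contois(0.01, 10.0))
```

Plotting these three curves over a range of S (and of X for the Contois form) makes the qualitative differences discussed above immediately visible.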

#### 2.2. Modeling of microbial substrate uptake and product production


In many cases, the ratio of cell mass produced per substrate utilized is constant, and it is defined as the cell yield from the substrate, YX/S, shown by Eq. (21)

$$Y\_{X/S} = -\frac{\Delta X}{\Delta S} \tag{21}$$

The minus sign in Eq. (21) ensures that YX/S is positive, as ΔS is negative. From Eq. (21), it can be seen that substrate consumption is proportional to cell growth, so substrate consumption can be simply modeled by Eq. (22)

$$-\frac{dS}{dt} = \frac{1}{Y\_{X/S}} \cdot \frac{dX}{dt} = \frac{\mu X}{Y\_{X/S}}\tag{22}$$

Further, the total substrate consumed can be considered as two parts, one for real cell growth and the other for life maintenance, which develops Eq. (22) into Eq. (23)

$$-\frac{dS}{dt} = \frac{1}{Y\_{X/S}} \cdot \frac{dX}{dt} = \frac{\mu X}{Y\_{X/S}} = \left(\frac{\mu}{Y\_G} + m\_S\right) \cdot X \tag{23}$$

where YG is the maximum cell yield, reached when μ tends to μm, and mS is the maintenance coefficient. From Eq. (23), Eq. (24) can be obtained, showing the positive relationship between the specific growth rate, μ, and the cell yield, YX/S.

$$\frac{1}{Y\_{X/S}} = \frac{m\_S}{\mu} + \frac{1}{Y\_G} \tag{24}$$

From Eq. (24), it can be seen that in order to increase the cell yield, YX/S, a high value of μ should be maintained.

The specific substrate consumption rate is defined as the substrate consumed in grams (g) per gram of dry cells (g) per hour (h), and it can be modeled by Eq. (25)

$$q\_S = \frac{1}{X} \cdot \frac{-dS}{dt} \tag{25}$$

From Eqs. (23) and (25), Eq. (26) can be obtained

$$q\_S = \frac{\mu}{Y\_{X/S}} = \frac{\mu}{Y\_G} + m\_S \tag{26}$$

Then, Eq. (22) or (23) can be expressed in a simple way by Eq. (27)

$$-\frac{dS}{dt} = q\_S \cdot X \tag{27}$$
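The yield-maintenance relationship of Eqs. (24) and (26) is easy to check numerically. In this sketch the values of YG and mS are invented for illustration; it only demonstrates that the observed yield YX/S falls below YG as μ decreases.

```python
# Observed yield vs. specific growth rate, from Eqs. (24) and (26).
# Y_G and m_S values are illustrative, not from the chapter.

Y_G = 0.5   # maximum cell yield, g cell / g substrate
m_S = 0.05  # maintenance coefficient, g substrate / g cell / h

def q_S(mu):
    """Specific substrate consumption rate, Eq. (26)."""
    return mu / Y_G + m_S

def Y_XS(mu):
    """Observed cell yield Y_X/S = mu / q_S, i.e. Eq. (24) rearranged."""
    return mu / q_S(mu)

for mu in (0.05, 0.2, 0.5):
    print(mu, round(Y_XS(mu), 3))
# The observed yield approaches Y_G only at high mu, which is why a
# high value of mu should be maintained to obtain a high Y_X/S.
```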


For the metabolism of facultative anaerobes grown under oxygen-limited conditions, the cell yield varies greatly depending on the degree of oxygen limitation. Catabolism of 1 mole of glucose produces 36 (or 38) moles of ATP under aerobic conditions, but only 2 moles of ATP under anaerobic conditions. The ATP-based cell yield, YATP, can be regarded as a constant of about 10 g dry cell/mol ATP. So, the cell yield YX/S under anaerobic conditions will be only 1/18 (or 1/19) of that under aerobic conditions. The partition of the carbon source between the aerobic and anaerobic pathways of catabolism is determined by the oxygen supply, or degree of oxygen limitation. Examples of cell growth under oxygen-limited conditions will be given in Sections 4.2 and 4.3.
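The 1/18 (or 1/19) figure follows directly from the stoichiometry quoted above; a short check, taking glucose as 180 g/mol:

```python
# ATP-based yield check: Y_ATP = 10 g dry cell per mol ATP,
# glucose molecular weight 180 g/mol.
Y_ATP = 10.0
M_gluc = 180.0

def Y_XS(atp_per_glucose):
    """Cell yield on glucose (g cell / g glucose) from the ATP yield."""
    return atp_per_glucose * Y_ATP / M_gluc

anaerobic = Y_XS(2)   # glycolysis: 2 mol ATP per mol glucose
aerobic = Y_XS(36)    # aerobic catabolism: 36 (or 38) mol ATP
print(anaerobic, aerobic, aerobic / anaerobic)  # ratio is 18
```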

For modeling the specific product production rate, the Luedeking-Piret equation is most often used for its simplicity and usefulness. It relates the specific product production rate to growth-related and nongrowth-related parts through the α and β terms, respectively, as described by Eq. (28). The total product production is described by Eq. (29).

$$
q\_p = \alpha \cdot \mu + \beta \tag{28}
$$

$$\frac{dP}{dt} = q\_p \cdot X \tag{29}$$

#### 3. Modeling of bioreactor with different operation methods

The continuous stirred tank reactor (CSTR) is the most popular type of bioreactor, and it can be operated in batch, fed-batch, and continuous modes. In batch culture, no substrate is fed into the bioreactor, except air for aeration or acid or base for pH control, and no culture broth is taken out of the bioreactor during the fermentation process. For modeling a typical batch culture, the specific rates of cell growth (μ), substrate uptake (qS), and product production (qP) introduced in Section 2, together with the mass balance equations of Eqs. (30)–(32), can be used.

$$\frac{dX}{dt} = \mu \cdot X \tag{30}$$

$$-\frac{dS}{dt} = q\_S \cdot X \tag{31}$$

$$\frac{dP}{dt} = q\_p \cdot X \tag{32}$$
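To show how Eqs. (30)–(32) are used in practice, the sketch below integrates them with explicit Euler steps, combining Monod growth, a constant yield, and the Luedeking-Piret production term [Eq. (28)]. All parameter values, the step size, and the 40 h horizon are illustrative assumptions, not values from the chapter.

```python
# Batch culture, Eqs. (30)-(32), integrated by explicit Euler.
# Illustrative parameters only.
mu_m, k_m = 0.4, 0.5     # 1/h, g/L (Monod constants)
Y_XS = 0.5               # g cell / g substrate
alpha, beta = 2.0, 0.02  # Luedeking-Piret constants

X, S, P = 0.1, 20.0, 0.0  # initial concentrations, g/L
dt = 0.01                 # h
for _ in range(int(40 / dt)):        # 40 h of batch fermentation
    mu = mu_m * S / (k_m + S)        # specific growth rate
    qS = mu / Y_XS                   # substrate uptake (no maintenance term)
    qP = alpha * mu + beta           # Eq. (28)
    X, S, P = (X + mu * X * dt,
               max(S - qS * X * dt, 0.0),
               P + qP * X * dt)

# With a constant yield, the final biomass is close to X0 + Y_XS * S0.
print(round(X, 2), round(S, 4), round(P, 2))
```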

For fed-batch culture, substrate is fed into the bioreactor but no culture broth is taken out during the fermentation process, so the liquid volume increases. For modeling fed-batch culture, V is variable and the mass balance equations of Eqs. (33)–(36) can be used. The specific rates μ, qS, and qP introduced in Section 2 can be used in the modeling.

$$\frac{d(VX)}{dt} = \mu VX \tag{33}$$

$$\frac{d(VS)}{dt} = FS\_f - \frac{1}{Y\_{X/S}}\mu VX\tag{34}$$

$$\frac{d(VP)}{dt} = q\_p \cdot VX \tag{35}$$

$$\frac{dV}{dt} = F\tag{36}$$

where F is the substrate feeding rate. Eqs. (33)–(35) can be transformed into Eqs. (37)–(39)

$$\frac{dX}{dt} = \mu X - \left(\frac{F}{V}\right)X\tag{37}$$

$$\frac{dS}{dt} = \left(\frac{F}{V}\right) \left(S\_f - S\right) - \frac{1}{Y\_{X/S}} \mu X \tag{38}$$

$$\frac{dP}{dt} = q\_p X - \left(\frac{F}{V}\right)P\tag{39}$$
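A fed-batch version follows the transformed balances, Eqs. (37)–(39), plus dV/dt = F [Eq. (36)]; the substrate balance is written so that feeding raises S. The constant feed rate, feed concentration, and all kinetic constants below are illustrative assumptions.

```python
# Fed-batch culture with a constant feed rate F, Eqs. (36)-(39).
# Explicit Euler; illustrative parameters only.
mu_m, k_m, Y_XS = 0.4, 0.5, 0.5
Sf, F = 400.0, 0.02        # feed glucose (g/L) and feed rate (L/h)

X, S, P, V = 5.0, 1.0, 0.0, 1.0   # g/L, g/L, g/L, L
dt = 0.01
for _ in range(int(20 / dt)):      # 20 h of feeding
    mu = mu_m * S / (k_m + S)
    qS = mu / Y_XS
    D = F / V                       # dilution term F/V
    X += (mu * X - D * X) * dt      # Eq. (37)
    S = max(S + (D * (Sf - S) - qS * X) * dt, 0.0)  # Eq. (38): feeding raises S
    P += (2.0 * mu * X - D * P) * dt   # Eq. (39) with qP = 2*mu (illustrative)
    V += F * dt                     # Eq. (36)

print(round(V, 2), round(X, 1), round(S, 3))
```

After an initial transient, almost all of the fed glucose is consumed, so the biomass tracks the cumulative feed while the glucose concentration stays low, which is the behavior that gives fed-batch its high conversion yield.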

In fed-batch culture, F can be continuous, for example, constant, linearly increasing, or exponentially increasing with time, or discontinuous, for example, operated in a repeated pulse-fed mode. Fed-batch culture has many advantages over batch culture: it has a higher substrate conversion yield, extends the production phase, and can eliminate substrate inhibition or Crabtree effects. It is widely used in industry.

Continuous culture is another bioreactor operation method, in which substrate is continuously fed into the bioreactor while culture broth is continuously taken out of the bioreactor at the same rate, so that the liquid volume remains unchanged. Continuous culture has the advantage of high production efficiency but the disadvantages of low substrate conversion yield, strain deterioration, and easy contamination, and it is not often used in industry. As a result, only batch and fed-batch cultures are investigated in the next section.

#### 4. Modeling and simulation of control of fermentation processes

#### 4.1. Effects of early pulse aeration on ethanol fermentation

#### 4.1.1. Mathematical modeling


Bioethanol is produced by anaerobic fermentation using Saccharomyces cerevisiae, which can grow anaerobically through the fermentative pathway (glycolysis), catabolizing 1 mole of glucose into 2 moles of ethanol and producing 2 moles of ATP. S. cerevisiae can also grow aerobically through the tricarboxylic acid (TCA) cycle, catabolizing 1 mole of glucose into 6 moles of CO2 and producing 38 moles of ATP. The cell yield from ATP, YATP, is relatively constant at about 10 g dry cell mass/mole ATP. S. cerevisiae therefore grows much faster aerobically than anaerobically, because more ATP is available for cell growth.

The fermentation period, which can be roughly divided into a growth phase and a production phase, is one major factor affecting the production cost. The fermentation period will be shortened if the cell growth phase is shortened. By employing an aerobic condition during the cell growth phase to speed up the cell growth, and an anaerobic condition during the ethanol production phase, the fermentation period should be shortened while the ethanol production is maintained. The growth-phase aerobic-pulse-stimulated ethanol fermentation and the normal anaerobic ethanol fermentation, both operated in batch mode, are investigated and compared by modeling and simulation [6].

The specific glucose consumption rate (qS), subject to substrate and product inhibition effects, is modeled by Eq. (40). In Eq. (41), Q is the on-off switch between anaerobic (Q = 1) and aerobic (Q̄ = 1) conditions (Q̄ is NOT Q). Eq. (42) describes the ATP production from glucose under anaerobic or aerobic conditions. The cell growth is based on the net ATP available for cell synthesis, shown by Eq. (43). Under aerobic conditions, 6 moles of O2 are needed to oxidize 1 mole of glucose, shown by Eq. (44). Ethanol is produced during the anaerobic production phase, shown by Eq. (45).

$$q\_S = \frac{q\_{S,\text{max}} \times S}{k\_S + S + S^2/k\_{iS}} \times \left(1 - \frac{P}{P\_{cri}}\right)^{\alpha} \tag{40}$$


$$Q = \begin{cases} 0 & \text{aerobic condition } (\overline{Q} = 1) \\ 1 & \text{anaerobic condition } (\overline{Q} = 0) \end{cases} \tag{41}$$

$$q\_{\rm ATP} = Q \times \frac{q\_S}{M\_{\rm Gluc}} \times 2 + \overline{Q} \times \frac{q\_S}{M\_{\rm Gluc}} \times 38 \tag{42}$$

$$
\mu = \left(q\_{\rm ATP} - m\_{\rm S.ATP}\right) \times Y\_{\rm ATP} \tag{43}
$$

$$q\_{O2} = \overline{Q} \times q\_S \times \frac{M\_{O\_2}}{M\_{\text{Gluc}}} \times 6 \tag{44}$$

$$q\_P = Q \times q\_S \times \frac{M\_{\rm EtOH}}{M\_{\rm Gluc}} \times 2 \tag{45}$$

where S and P are the glucose and product (ethanol) concentrations, respectively; Pcri is the critical ethanol concentration for inhibition of glucose consumption; qS and qS,max are the specific glucose consumption rate and its maximum value, respectively; kS, kiS, and α are constants; qATP, qP, and qO2 are the specific rates of ATP production, ethanol production, and oxygen consumption, respectively, and μ is the specific growth rate; mS.ATP is the ATP consumption constant for cell maintenance; YATP is the cell yield from ATP. The mass balance equations are shown by Eqs. (46)–(49)

$$\frac{dX}{dt} = \mu \times \left(1 - \frac{X}{X\_{\text{max}}}\right) \times X \tag{46}$$

$$-\frac{dS}{dt} = q\_S \times X \tag{47}$$

$$\frac{dP}{dt} = q\_p \times X \tag{48}$$

$$\frac{d\text{OUR}}{dt} = q\_{O2} \times X \tag{49}$$

where OUR is oxygen uptake rate.
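The switching structure of Eqs. (40)–(46) can be exercised directly. The sketch below runs the model once fully anaerobically and once with an early 3 h aerobic pulse (Q = 0, i.e. Q̄ = 1). All parameter values are invented for illustration, and the chapter's aerobic adjustment of qS,max and mS is omitted here, so only the qualitative effect (faster early growth, earlier ethanol accumulation) should be read from it.

```python
# Q-switched ethanol fermentation sketch, after Eqs. (40)-(46).
# Illustrative parameters; the aerobic change of qS,max and mS is omitted.
M_gluc, M_etoh = 180.0, 46.0
qS_max, kS, kiS = 2.0, 0.2, 400.0
P_cri, a_exp = 100.0, 1.5
mS_ATP, Y_ATP, X_max = 0.002, 10.0, 11.0

def step(X, S, P, Q, dt=0.01):
    qS = qS_max * S / (kS + S + S**2 / kiS) * max(1 - P / P_cri, 0.0) ** a_exp
    q_ATP = Q * qS / M_gluc * 2 + (1 - Q) * qS / M_gluc * 38   # Eq. (42)
    mu = max(q_ATP - mS_ATP, 0.0) * Y_ATP                       # Eq. (43)
    qP = Q * qS * M_etoh / M_gluc * 2                           # Eq. (45)
    X += mu * (1 - X / X_max) * X * dt                          # Eq. (46)
    S = max(S - qS * X * dt, 0.0)
    P += qP * X * dt
    return X, S, P

def run(pulse_h, hours=40.0, dt=0.01):
    X, S, P = 0.1, 150.0, 0.0
    for i in range(int(hours / dt)):
        Q = 1 if i * dt >= pulse_h else 0   # Q = 0 during the aerobic pulse
        X, S, P = step(X, S, P, Q, dt)
    return X, S, P

print(run(0.0))  # conventional anaerobic fermentation
print(run(3.0))  # early 3 h aerobic pulse, then anaerobic
```

The pulse run reaches high cell density within the first few hours, so far more ethanol has accumulated by the end of the simulated horizon than in the purely anaerobic run.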

#### 4.1.2. Simulation results


Simulation was made using the above mathematical model (Figure 3). The references for the parameter values used in the model, or used in calculating those parameter values, are shown in Table 1. Under aerobic conditions, qS,max of 1 g/g/h and mS of 0.1 mole/g/h are used; compared with the anaerobic condition, the former is decreased and the latter increased.

The simulation results of the conventional anaerobic ethanol fermentation and of the early growth-phase aerobic-pulse-stimulated ethanol fermentation are shown in Figure 3. In the simulation of the aerobic-pulse process, Q was set to zero (Q̄ = 1) for the first 3 h. qATP and μ abruptly decreased, and qS and qEtOH abruptly increased, when Q shifted from 0 to 1 (Figure 3B). The results showed that the early-stage aerobic pulse stimulated the ethanol fermentation and shortened the fermentation period by more than 10 h compared with the conventional anaerobic ethanol fermentation.

#### 4.2. Fermentation with substrate feeding using DO stat control strategy

#### 4.2.1. Mathematical modeling

In this section, glucose feeding control based on dissolved oxygen (DO) is investigated using a fed-batch fermentation process with Escherichia coli, which is often used as the host for recombinant protein production. This control strategy, which relates the glucose concentration to DO changes [10, 11], is practical, as DO sensors are widely used in fermentation technology. In the fermentation process, oxygen is continuously transferred into the liquid phase from the gas phase at a certain oxygen transfer rate (OTR) under aeration and agitation; meanwhile, oxygen is continuously consumed by the microbes at a certain oxygen uptake rate (OUR). After the cells reach a high concentration, oxygen limitation occurs when OUR becomes larger than OTR, so that DO decreases to nearly zero. On the other hand, E. coli catabolizes glucose aerobically through the tricarboxylic acid (TCA) cycle, catabolizing one molecule of glucose into six molecules of CO2 while consuming six molecules of O2. When glucose is depleted, O2 consumption stops (OUR = 0) while OTR remains positive, so that DO rises suddenly. The sudden DO rise can therefore indicate glucose depletion and be used as the signal for glucose feeding. After glucose feeding, glucose consumption and oxygen uptake resume, and DO drops again. The control system then monitors for the next sudden DO rise. This strategy can maintain the glucose concentration at a low level and is called the DO stat control strategy. It has been analyzed by a modeling method [12].


| Parameter | Value | Reference |
| --- | --- | --- |
| kS | 0.213 g/L | [7] |
| kiS | 386.64 g/L | [7] |
| Pcri | 226 g/L | [7] |
| μmax (anaerobic) | 0.45 1/h | [8] |
| YX/S (anaerobic) | 0.15 g/g | [8] |
| α | 1.5 | [8] |
| mS | 0.01 mole/g/h | [9] |
| Xmax (anaerobic) | 11 g/L | [7] |
| YATP | 10 g/mol | [9] |
| qS,max (anaerobic) | 3 g/g/h | qS,max = μmax/YX/S |


Table 1. Parameter values and references.

Figure 3. Simulation of ethanol fermentation. (A) Normal anaerobic fermentation. (B) Early 3 h aerobic pulse followed by anaerobic fermentation.



The specific growth rate is modeled by the logistic equation shown by Eq. (50). The specific glucose consumption rate is modeled to include two parts, one for net growth and the other for maintenance, shown by Eq. (51). The molar specific oxygen consumption rate is six times the specific glucose consumption rate, as shown by Eq. (52). The specific product production rate is modeled by the Luedeking-Piret equation [Eq. (53)]. OUR and OTR are shown by Eqs. (54) and (55), respectively.

$$
\mu = \frac{\mu\_m \cdot S}{k\_m + S} \cdot \left(1 - \frac{X}{X\_m}\right) \tag{50}
$$

$$
q\_S = \frac{\mu}{Y\_G} + m\_S \tag{51}
$$

$$
q\_{O\_2} = q\_S \cdot \frac{M\_{O\_2}}{M\_{\text{Gluc}}} \times 6 \tag{52}
$$

$$
q\_P = \alpha \cdot \mu + \beta \tag{53}
$$

$$\text{OUR} = q\_{O\_2} \cdot X \tag{54}$$

$$\text{OTR} = k\_L a \cdot (\text{C}^\* - \text{C}\_L) \tag{55}$$

where qO2 and qP are the specific rates of O2 consumption and product production, respectively; α and β are the constants of the Luedeking-Piret equation; kLa is the volumetric oxygen transfer coefficient; CL and C\* are the dissolved oxygen concentration and its saturation value, respectively; MO2 and MGluc are the molecular weights of O2 and glucose, respectively.

The mass balance equations for fed-batch culture can be made and transformed into Eqs. (56)–(60).

$$\frac{dX}{dt} = \mu X - \left(\frac{F}{V}\right)X\tag{56}$$

$$\frac{dS}{dt} = \left(\frac{F}{V}\right) \cdot \left(S\_f - S\right) - q\_S \cdot X \tag{57}$$

$$\frac{dP}{dt} = q\_p \cdot X - \left(\frac{F}{V}\right) \cdot P \tag{58}$$

$$\frac{dC\_L}{dt} = \text{OTR} - \text{OUR} - \left(\frac{F}{V}\right) \cdot C\_L \tag{59}$$

$$\frac{dV}{dt} = F\tag{60}$$

Figure 4. Computer simulation of process variables of DO stat fed-batch culture.


where P is the product concentration; Sf is the substrate concentration in the feeding solution; V is the volume of the culture broth; F is the feeding rate.

#### 4.2.2. Simulation results

In the simulation, glucose pulse feeding was triggered when DO increased by more than 10%, in order to avoid interruptions from noise. In each pulse feeding, a dosage equivalent to a 20 g/L increase in glucose concentration was fed. The initial glucose was depleted at about 75 h, when the product concentration was a little over 6 g/L. By glucose pulse feeding, the product concentration was more than doubled (Figure 4). Glucose and DO concentrations go up and down in turn, fluctuating during the control period. By using this control strategy, the glucose concentration can be maintained at a low average level, which is desired, as it helps to overcome glucose effects and increase the product yields. In addition, the DO stat control strategy does not need an extra sensor and is easily applied.
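The DO stat trigger logic itself is simple enough to sketch. The loop below uses toy culture dynamics (a fixed uptake rate while glucose is present, and a DO jump when it runs out) purely to show the trigger-and-feed mechanism; it is not the model of Eqs. (50)–(60), and every value in it is invented.

```python
# DO stat pulse-feeding logic with toy dynamics: a DO rise above the
# threshold signals glucose depletion and triggers a 20 g/L pulse feed.
DO_base, threshold = 5.0, 10.0  # % DO while glucose is consumed; trigger level
S, feed_dose = 10.0, 20.0       # glucose, g/L
uptake = 2.0                    # g/L/h while glucose is present (toy value)
pulses = 0
dt = 0.01

for _ in range(int(24 / dt)):   # 24 h
    if S > 0.0:
        DO = DO_base            # OUR ~ OTR keeps DO low
        S = max(S - uptake * dt, 0.0)
    else:
        DO = 50.0               # OUR = 0 while OTR continues: DO jumps
    if DO > threshold:          # sudden DO rise detected
        S += feed_dose          # pulse feed
        pulses += 1

print(pulses)   # two depletion events occur within these 24 h
```

As in the simulated fermentation above, glucose and DO rise and fall in turn: each depletion produces a DO spike, each spike produces a feed pulse, and the average glucose level stays low.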

#### 4.3. Fermentation with DO feedforward-feedback control and substrate-feedback control

#### 4.3.1. Mathematical modeling

DO control is important in fermentation processes. The level of DO can affect the metabolic flux distribution, the product yield, and the production efficiency. As oxygen has low solubility in water, DO control is a hard task in fermentation processes. Compared with feedback control alone, DO feedforward-feedback (FF-FB) control has the advantage of dealing with the time-varying characteristics that result from cell growth during the fermentation process. The oxygen consumption of the microbial cells is considered the disturbance to the control system; it is estimated by using the mathematical model and compensated by the FF control action. The substrate is FB controlled by repeated pulse feeding of the carbon source. The schematic diagram of the control system is shown in Figure 5 [13].

The specific cell growth rate is modeled using the double-substrate Monod equation shown by Eq. (61). The equations for the specific glucose consumption rate, the specific oxygen consumption rate, OUR, and OTR are shown by Eqs. (62)–(65), which are the same as those in Section 4.2.1.


Figure 5. The schematic diagram of the bioreactor control system. (A) Bioreactor: F, substrate feeding rate; N, agitation speed; G, aeration rate. (B) DO FF-FB control. (C) Substrate FB control.


$$
\mu = \frac{\mu\_m \cdot S}{k\_m + S} \cdot \frac{C\_L}{k\_{O\_2} + C\_L} \tag{61}
$$

$$
q\_S = \frac{\mu}{Y\_G} - m\_S \tag{62}
$$

$$
q\_{O\_2} = q\_S \cdot \frac{M\_{O\_2}}{M\_{\text{Gluc}}} \cdot 6 \tag{63}
$$

$$\text{OUR} = q\_{O\_2} \cdot X \tag{64}$$

$$\text{OTR} = k\_L a \cdot (C\_L^{\*} - C\_L) \tag{65}$$

The mass balance equations for the repeated fed-batch culture are described by Eqs. (66)–(69).

$$\frac{dX}{dt} = \mu \cdot X - \frac{F}{V} \cdot X \tag{66}$$

$$\frac{dS}{dt} = -q\_S \cdot X + \frac{F}{V} \cdot (S\_F - S) \tag{67}$$

$$\frac{dC\_L}{dt} = \text{OTR} - \text{OUR} - \frac{F}{V} \cdot C\_L \tag{68}$$

$$\frac{dV}{dt} = F \tag{69}$$

where SF is the substrate concentration in the concentrated feeding solution. The feeding solution is concentrated so that the volume change resulting from the substrate feeding can be neglected in Eqs. (66)–(68).

For FF control of DO, in order to compensate for the DO disturbance resulting from cell growth, OTR should be equal to OUR according to Eq. (68), if the dilution effect of the feeding is neglected, so as to keep CL unchanged (dCL/dt = 0) at the set-point. As the oxygen transfer driving force, ΔC = (CL\* − CL), is relatively constant when CL is maintained at the set-point, kLa should be controlled to satisfy Eq. (70), which compensates the time-varying OUR caused by cell respiration according to Eqs. (65) and (68) with dCL/dt = 0.

$$\frac{\text{OUR}}{C\_L^{\*} - C\_L} = k\_L a \tag{70}$$

The value of kLa is controlled by the agitation speed (N) and aeration rate (G), as shown by Eq. (71).

$$k\_{\rm L}a = k \cdot \text{N}^3 \cdot \text{G}^{0.5} \tag{71}$$

Between the two manipulated variables, N is more effective than G in controlling kLa [14]. Therefore, 70% of the control effort is assigned to N and 30% to G by using Eqs. (72) and (73), respectively, which are derived from Eqs. (70) and (71).


$$N\_{\text{FF} \cdot t} = \left(\frac{\text{OUR}}{C\_L^{\*} - C\_L} \cdot \frac{1}{k \cdot G\_{t-1}^{\,0.5}}\right)^{\frac{1}{3}} \cdot 70\% \tag{72}$$

$$G\_{\text{FF} \cdot t} = \left(\frac{\text{OUR}}{C\_L^{\*} - C\_L} \cdot \frac{1}{k \cdot N\_{t-1}^{\,3}}\right)^{2} \cdot 30\% \tag{73}$$

where the subscripts t and t−1 denote the current and previous time points, respectively. Thus, the N and G control actions are completed over several control rounds. Eqs. (72) and (73) are used in the FF control.
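Eqs. (70)–(73) translate directly into code. The sketch below is a literal transcription under assumed consistent units; the function name, the constant `k`, and the operating point used in the usage note are illustrative, not values from the chapter.

```python
def ff_actions(OUR, C_L_star, C_L, k, N_prev, G_prev):
    """Feedforward shares of agitation speed N and aeration rate G.

    The kLa required to balance OUR at constant DO comes from Eq. (70);
    Eqs. (72) and (73) invert kLa = k * N**3 * G**0.5 (Eq. 71) for one
    manipulated variable at a time, holding the other at its last value,
    and assign 70% of the effort to N and 30% to G.
    """
    kla_req = OUR / (C_L_star - C_L)                             # Eq. (70)
    N_ff = (kla_req / (k * G_prev**0.5)) ** (1.0 / 3.0) * 0.70   # Eq. (72)
    G_ff = (kla_req / (k * N_prev**3)) ** 2 * 0.30               # Eq. (73)
    return N_ff, G_ff
```

For example, with the assumed values k = 8e-7, N_prev = 500, G_prev = 1.0 (so that the previous kLa was 100), an OUR that again requires kLa = 100 returns N_ff = 350 (70% of the 500 needed by N alone) and G_ff = 0.3 (30% of the 1.0 needed by G alone).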

As model predictions may not be very accurate, FB control is used to eliminate the control error and ensure the control accuracy. In FB control, the error between the DO set-point and the process variable is calculated by Eq. (74).

$$
e = C\_{L,\text{sp}} - C\_L \tag{74}
$$

where CL,sp is the DO set-point. The proportional-integral (PI) control strategy is used for FB control, by using Eqs. (75) and (76) for N and G control, respectively. Similarly, 70% of the control action is assigned to NFB and 30% to GFB.

$$N\_{\text{FB} \cdot t} = \left(k\_{P \cdot N} \cdot e + k\_{I \cdot N} \cdot \int e \, \mathrm{d}t\right) \cdot 70\% \tag{75}$$

$$G\_{\text{FB} \cdot t} = \left(k\_{P \cdot G} \cdot e + k\_{I \cdot G} \cdot \int e \, \mathrm{d}t\right) \cdot 30\% \tag{76}$$

Then, the total DO control actions of N and G are given by Eqs. (77) and (78):

$$N\_t = N\_0 + N\_{\text{FF}\cdot t} + N\_{\text{FB}\cdot t} \tag{77}$$

$$\mathbf{G}\_t = \mathbf{G}\_0 + \mathbf{G}\_{\text{FF}\cdot t} + \mathbf{G}\_{\text{FB}\cdot t} \tag{78}$$


where N<sup>0</sup> and G<sup>0</sup> are the initial values of N and G, respectively.
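The FB part (Eqs. (74)–(78)) is a standard PI law with the same 70/30 split. The sketch below uses a discrete (rectangular) approximation of the integral term; the class name, the gains, and the sampling interval are assumed placeholders, not values from the chapter.

```python
class DOFeedback:
    """Discrete PI feedback on DO, split 70/30 between N and G (Eqs. 74-76)."""

    def __init__(self, kP_N=2.0, kI_N=0.5, kP_G=0.01, kI_G=0.002, dt=0.1):
        self.kP_N, self.kI_N = kP_N, kI_N    # gains for N (assumed values)
        self.kP_G, self.kI_G = kP_G, kI_G    # gains for G (assumed values)
        self.dt, self.integ = dt, 0.0        # sampling step, integral of e

    def step(self, C_L_sp, C_L):
        e = C_L_sp - C_L                     # Eq. (74)
        self.integ += e * self.dt            # rectangular integral of e dt
        N_fb = (self.kP_N * e + self.kI_N * self.integ) * 0.70   # Eq. (75)
        G_fb = (self.kP_G * e + self.kI_G * self.integ) * 0.30   # Eq. (76)
        return N_fb, G_fb
```

The totals of Eqs. (77) and (78) are then formed by adding these FB actions and the FF actions to the initial values N0 and G0.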

#### 4.3.2. Simulation results

In this system, DO is FF-FB controlled by the agitation speed and aeration rate, and the substrate concentration is FB controlled by repeated pulse feeding of a certain amount of the concentrated feeding solution, to make the substrate concentration reach 30 g/L whenever it falls below the set-point of 5 g/L. In order to confirm the robustness of the control system under model prediction errors and noise, 5% randomized noise and a 20% overestimate of the cell growth were added to the mathematical model predictions in FF control. Then, simulations were made with the above noise and prediction errors. The results indicated that even when noise and relatively large model prediction errors existed, the control system still performed well, because FB control finally compensated for the inaccuracy of the FF control, as shown in Figure 6 [13].
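The robustness claim can be reproduced qualitatively with a toy closed loop: a first-order DO balance (Eq. (68) with the dilution term dropped), an FF term computed from an OUR estimate that is deliberately 20% too high, and PI feedback acting directly on kLa. Everything here (plant values, gains, step size) is an illustrative assumption; the point is only that the integral action removes the offset left by the biased FF term.

```python
# Toy FF-FB loop: biased feedforward, PI feedback on kLa (assumed numbers).
C_star, C_L_sp = 7.0, 3.0        # mg/L: DO saturation and set-point
OUR = 400.0                      # mg/(L h): true oxygen uptake rate
kla_ff = 1.2 * OUR / (C_star - C_L_sp)   # FF with a 20% overestimate of OUR
kp, ki = 10.0, 50.0              # PI gains on kLa (assumed)

C_L, integ, dt = C_L_sp, 0.0, 0.001      # start at the set-point
for _ in range(5000):                     # 5 h of simulated time
    e = C_L_sp - C_L                      # control error, Eq. (74)
    integ += e * dt
    kla_cmd = kla_ff + kp * e + ki * integ          # FF + PI FB
    C_L += (kla_cmd * (C_star - C_L) - OUR) * dt    # Eq. (68), F/V term dropped
```

With FF alone (kp = ki = 0) the DO would settle at about 3.7 mg/L because kla_ff is 20% too large; with the PI term the loop returns to the 3.0 mg/L set-point, mirroring the behaviour reported in Figure 6.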


Figure 6. Simulation of DO FF-FB control and substrate FB control with prediction error and noise. The model predictions with 5% randomized noise and a 20% overestimate of the cell growth in FF control.

## 5. Conclusion

Modeling and simulation are useful tools for the understanding, analysis, and optimization of bioprocesses [14–17]. By using modeling and computer simulation methods, the dynamics of cell growth and metabolism under different conditions and various fermenter operation modes can be evaluated, and the information can be used for bioprocess optimization and bioreactor control.

## Acknowledgements

This work was supported by grants from the Natural Science Foundation (Grant No. 31370138, 31570036), the National Basic Research Program (2010CB630902), the Natural Science Foundation (Grant No. 31400093, 31370084, 30800011), the Postdoctoral Science Foundation (Grant No. 2015M580585), and the State Key Laboratory of Microbial Technology Foundation (M2015-03) of China.

## Author details

Jianqun Lin<sup>1</sup>, Ling Gao<sup>2</sup>, Huibin Lin<sup>3</sup>, Yilin Ren<sup>4</sup>, Yutian Lin<sup>5</sup> and Jianqiang Lin<sup>1</sup>\*

\*Address all correspondence to: jianqianglin@sdu.edu.cn

1 State Key Lab of Microbial Technology, School of Life Sciences, Shandong University, Jinan, China

2 Institute of Information Science and Engineering, School of Information Science and Engineering, Shandong Normal University, Jinan, China

3 Shandong Academy of Chinese Medicine, Jinan, China

4 School of Life Sciences, Tsinghua University, Beijing, China

5 College of Computer Sciences & Technology, University of Technology, Sydney, Australia

## References

[1] Monod J. The growth of bacterial cultures. Annual Review of Microbiology. 1949;3:371

[2] Buchanan RE. Life phase in a bacterial culture. Journal of Infectious Diseases. 1918;23(1):109-125

[3] Moser A. Bioprocess Technology: Kinetics and Reactors (Chinese translation by Qu Y). pp. 240-367. Hu Nan Science Press, China and Springer-Verlag, New York, USA; 1994

[4] Tsoularis A, Wallace J. Analysis of logistic growth models. Mathematical Biosciences. 2002;179(1):21-55

[5] Lin J, Lee SM, Lee HJ, Koo YM. Modeling of typical microbial cell growth in batch culture. Biotechnology and Bioprocess Engineering. 2000;5(5):382-385

[6] Jia X, Lin Y, Lin H, Gao L, Lin J, Lin J. Modeling and computer simulation of early pulse aeration effects on ethanol fermentation. WIT Transactions on Biomedicine and Health. 2014;18:576-582

[7] Liu CG, Lin YH, Bai FW. A kinetic growth model for Saccharomyces cerevisiae grown under redox potential-controlled very-high-gravity environment. Biochemical Engineering Journal. 2011;56:63-68

[8] Dantigny P. Modeling of the aerobic growth of Saccharomyces cerevisiae on mixtures of glucose and ethanol in continuous culture. Journal of Biotechnology. 1995;4:213-220

[9] Lin Y, Liu G, Lin H, Gao L, Lin J. Analysis of batch and repeated fedbatch productions of Candida utilis cell mass using mathematical modeling method. Electronic Journal of Biotechnology. 2013;16(4)

[10] Lee SY. High cell-density culture of Escherichia coli. Trends in Biotechnology. 1996;14:98-105

[11] Cutayar JM, Poillon D. High cell density culture of E. coli in a fed-batch system with dissolved oxygen as substrate feed indicator. Biotechnology Letters. 1989;11:155-160

[12] Gao L, Lin Y, Lin H, Jia X, Lin J, Lin J. Bioreactor substrate feeding control using DO stat control strategy: A modeling and computer simulation study. Applied Mechanics and Materials. 2014;541-542:1198-1202

[13] Gao L, Lin H. Simulation of model based feedforward and feedback control of dissolved oxygen (DO) of microbial repeated fed-batch culture. International Journal of Simulation: Systems, Science and Technology. 2016;17(14):4.1-4.6

[14] Salehmin MNI, Annuar MSM, Chisti Y. High cell density fed-batch fermentation for the production of a microbial lipase. Biochemical Engineering Journal. 2014;85:8-14

[15] Potvin G, Ahmad A, Zhang Z. Bioprocess engineering aspects of heterologous protein production in Pichia pastoris: A review. Biochemical Engineering Journal. 2012;64:91-105

[16] Guo Q, Liu G, Dong N, Li Q, Lin J, Lin J. Model predictive control of glucose feeding for fed-batch Candida utilis biomass production. Research Journal of BioTechnology. 2013;8(7):3-7

[17] Lin J, Takagi M, Qu Y, Yoshida Y. Possible strategy for on-line monitoring and control of hybridoma cell culture. Biochemical Engineering Journal. 2002;11:205-209



**Chapter 6**


## **Developing a Hybrid Model and a Multi‐Scale 3D Concept of Integrated Modelling High‐Temperature Processes**

## Marcin Hojny

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67735

#### Abstract

The chapter presents an idea of constructing a scientific workshop focused on hightemperature processes, based upon a concept of integrated modelling combining the advantages of computer and physical simulations. Examples of physical simulation results aiming at determining necessary material data and high-temperature characteristics for the needs of computer simulations are presented. They are complemented by an outline of numerical 3D models developed and results of test simulations of a 3D solver of the finite element method (FEM) and the smoothed particle hydrodynamics (SPH). This chapter is closed with a short summary, indicating trends in further research focused on a further development of the DEFFEM simulation package and research and measurement methods.

Keywords: extra-high temperature, finite element method (FEM), smoothed particle hydrodynamics (SPH), physical simulation, computer simulation, mushy zone

## 1. Introduction

In recent years, many companies have been working to develop a rolling process with the semi-solid core [1–6]. While thin strip casting combined with subsequent rolling is a simple, improved method of the conventional rolling process, rolling a strip, in which both the solid and liquid phases coexist, is a new process. Cold rolling following simple strand casting is a long process and it is not cost-effective because of energy reasons. For technological reasons, the process should be developed to simplify or eliminate some operations, which would drastically reduce the energy costs. This also involves beneficial environmental impact, due to the reduction of gas emissions. Casting processes followed immediately by rolling have various versions, which depend on the applying companies, and differ with details of industrial installations [7]. However, the need for controlled rolling of strands cast is pointed out.

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Therefore, the results of a computer simulation or a physical simulation of the process analysed will be useful to control the process parameters. This has inspired the author to take up intensive and time-consuming work related to the development of research methods and mathematical models, along with their numerical implementation [7]. The research and scientific work carried out since 2009, as part of two projects financed by Polish academic institutions, resulted in the development of modelling concepts integrating the physical and computer simulation areas, while providing full or partial exchange of information between those areas. The proposed concept utilizes the capabilities of modern thermomechanical simulators of the Gleeble 3800 series in the modelling of steel deformation processes at extra-high temperatures, as well as in the modelling of integrated casting and rolling processes of flat strands with a solidifying core. The other component, necessary and unique on a global scale, is the original DEFFEM simulation package, which has been under development for several years and is dedicated to two classes of problems: the modelling of steel deformation within a temperature range near the solidus line, and within the temperature range between the solidus and liquidus temperatures. These two classes of problems have been jointly classified as high-temperature processes. The whole modelling approach is complemented by the use of modern testing and measurement instruments to verify the implemented solutions or to obtain additional information that cannot be obtained with traditional methods. This required adopting a number of simplifications and assumptions.
Most important was the systematic pursuit of the assumed goals and recent research in the context of planned work aiming at developing two unique numerical models: a full three-dimensional multi-scale MCFE model (Monte Carlo and finite elements) and a hybrid model combining the advantages of the finite element method (FEM) and smoothed particle hydrodynamics (SPH). The author's research is therefore pioneering both in theory and practice. The general outline of the developed concept is presented in Figure 1. Coupling and exchange of information


between the areas of physical and computer simulations allows, among others, the data necessary for numerical simulations (stress-strain curves, characteristics of changes of current intensity versus time, local temperature changes within the sample volume, etc.) to be identified. At the same time, the application of computer simulations using the dedicated DEFFEM simulation system allows the obtained physical simulation findings to be interpreted [7]. A few layers can be distinguished when analysing the general outline of the concept in Figure 1. The foundation consists of axisymmetric solutions, which are the key to the developed numerical identification methodology (NIM) for determining the mechanical properties of the steels tested, necessary for numerical analyses. Beyond the axisymmetric solutions, the next layer consists of 3D solutions, which become important in the context of designing the simulation system of integrated strip casting and rolling. Such an approach is determined by the fact that the zone of solid and liquid phase mixing within the volume of the rolled strand often has a very irregular shape [7].

More details concerning the developed concept of integrated modelling or comprehensive physical tests can be found in the recapitulating monograph by Hojny [7].

## 2. The DEFFEM simulation system

Therefore, results of a computer simulation or a physical simulation of the process analysed will be useful to control the process parameters. This has inspired the author to take up intensive and time-consuming work related to the development of research methods and mathematical models, along with their numerical implementation [7]. The research and scientific work carried out since 2009 as part of two projects financed by Polish academic institutions resulted in the development of modelling concepts integrating physical and computer simulation areas, while providing full or partial exchange of information between those areas. The proposed concept utilizes capabilities of modern thermomechanical simulators of the Gleeble 3800 series in the modelling of steel deformation processes at extra-high temperatures, as well as in the modelling of integrated casting and rolling processes of flat strands with a solidifying core. The other necessary and unique component in the global scale is the original simulation package DEFFEM being developed for a few years, dedicated to two classes of issues, namely the processes of steel deformation modelling within a temperature range near the solidus line, and within a temperature range between the solidus and liquidus temperatures. The mentioned two classes of issues have been jointly classified as high-temperature processes. The whole modelling approach is complemented by the utilisation of modern testing and measurement instruments to verify the implemented solutions or to obtain additional information that cannot be obtained with traditional methods. This required adopting a number of simplifications and assumptions. 
Most important was the systematic pursuit of the assumed goals and recent research in the context of planned work aiming at the development of two unique numerical models: a full three-dimensional multi-scale model MCFE (Monte Carlo and finite elements) and a hybrid model combining the advantages of the finite element method (FEM) and smoothed particle hydrodynamics (SPH). The author's research is therefore pioneering both in theory and in practice. The general outline of the developed concept is presented in Figure 1.

118 Computer Simulation

Figure 1. General outline of the concept of integrated modelling focused on high-temperature processes.

The original simulation package DEFFEM is developed in accordance with the design philosophy ONEDES (ONEDECisionSoftware) proposed by Hojny [8]. It is based on the assumption that the set of independent modules comprising the DEFFEM package is implemented numerically so that a virtual test of resistance heating combined with deformation can be performed in a wide range of temperatures, in particular at extra-high temperatures near the solidus line, as well as in conditions of solid and liquid phase coexistence, without the need for any commercial applications. Table 1 compares the currently developed modules (solvers) of the DEFFEM package with the ones under development, along with their classification according to the adopted numerical simulations (solver class) and the possibilities for using each solver to simulate specific main features. The developed simulation package also provides tools oriented at the full identification of the selected parameters of numerical models on the basis of data coming directly from physical simulations (DEFFEM |inverse module). In parallel with the design of such an advanced simulation tool, visualisation tools and tools for the analysis of findings are being implemented. Advanced numerical algorithms have been developed for plotting the isolines of scalar fields, visualising vector fields and enabling stereoscopic data to be visualised (module DEFFEM |pre&post).

| Module | Version | Solver class | Main features |
|---|---|---|---|
| DEFFEM \|solver\_3D\_TH | v.1.0 | 3D thermal, finite element method | Convection heat transfer; transient heat flow; resistance heating |
| DEFFEM \|solver\_3D\_TM | v.1.0 | 3D thermomechanical, finite element method | Compression tests; solidification |
| DEFFEM \|solver\_3D\_MCFE | Beta version | 3D multi-scale thermomechanical, Monte Carlo + finite element method | Compression tests; solidification |
| DEFFEM \|solver\_3D\_FLUID | Beta version | 3D fluid, meshless method | Viscosity; porous structure; solidification |
| DEFFEM \|solver\_3D\_HYBRID | v.1.0 | 3D fluid, meshless method + finite element method | Integrated with temperature field after resistance heating with DEFFEM \|solver\_AX\_TH; solidification |

| Supporting module | Main features |
|---|---|
| DEFFEM \|pre&post | Full pre- and post-processor for all solvers |
| DEFFEM \|inverse | Stand-alone inverse module |

Table 1. Component modules of the DEFFEM simulation package.

Developing a Hybrid Model and a Multi‐Scale 3D Concept of Integrated Modelling High‐Temperature Processes

http://dx.doi.org/10.5772/67735

121

## 3. High-temperature testing methodology

The testing methodology is illustrated with the newly tested steel 11SMn30. According to the adopted concept of integrated modelling, physical simulations are performed with a thermomechanical Gleeble 3800 simulator. Figure 2A presents a view of the simulation system before starting the physical simulation. During the tests, cylindrical samples (Figure 2B/C) and hexahedral samples (Figure 2D) with various dimensions were used, along with copper grips with a long zone of contact with the sample (Figure 2B).

Figure 2. The view of the Gleeble simulator system (A), a cylindrical sample and 'cold' grips (B), a sample for testing the nil strength temperature (NST) (C), and the view of the installed hexahedral sample in the Gleeble simulator system (D).

For the execution of the remelting process, a cylindrical quartz shield was additionally applied to prevent potential leakages of liquid steel into the simulator. To carry out numerical simulations, it was necessary to define the material properties characterising the specific steel grade. To determine the necessary thermophysical data, the commercial program JMatPro was used. This program determines the requested dependences (resistivity, thermal conductivity and specific heat) on temperature based on the chemical composition. In order to transfer the simulation results into industrial conditions, a number of notions and parameters were introduced to characterise the mechanical properties of steel in the semi-solid state. By the 'semi-solid state' we refer hereinafter to the material state able to withstand loads (where the liquid phase coexists within a cohesive solid-phase skeleton). The basic high-temperature characteristics include:

1. Nil strength temperature (NST), at which the material strength determined during steel heating drops to zero. This temperature is determined after applying a very small tensile force, which for the tests carried out with the Gleeble 3800 simulator is about 80 N.

2. Strength recovery temperature (SRT), at which the material regains its strength (>0.05 kg/mm² = 0.4909 MPa). It is determined during cooling after previous heating of the steel to the liquid state.

3. Nil ductility temperature (NDT), at which ductility determined by reduction of area drops to zero. This temperature is determined during steel heating.




4. Ductility recovery temperature (DRT), at which the reduction of area achieves the value of ≥5%. The temperature DRT is determined during cooling from a temperature over NDT.


5. The determination of changes of stress versus strain on the basis of tensile tests for the selected temperature range and the tool stroke rate.

The tests were carried out on plain samples Ø6 × 82 mm and two-sided threaded samples Ø10 × 125 mm. On the basis of computation results based on the chemical composition, carried out with the JMatPro program, the calculated liquidus Tl and solidus Ts temperatures were 1518 and 1439 °C, respectively. When determining the temperature NST, initially two tests are made. If the difference between the obtained NST values is >20 °C, a third test should be performed. The average value is taken as the temperature NST. In this study, the temperature NST was assumed as the mean value for seven samples. The samples for the determination of the nil strength temperature NST were first heated at a rate of 20 °C/s to a temperature of 1350 °C, and next at a rate of 1 °C/s to the temperature of failure. The determined temperature NST for steel 11SMn30 was 1410 ± 15 °C. The next stage of tests of steel 11SMn30 included determining the nil ductility temperature NDT. The samples for the determination of NDT were first heated at a rate of 20 °C/s to a temperature of 1300 °C, and next at a rate of 1 °C/s to the deformation temperature, with an additional 5 s holding at a constant temperature before the deformation process. The samples were tensioned at a rate of 20 mm/s. In steel 11SMn30, the loss of reduction of area was recorded after exceeding the temperature of 1425 °C; therefore, this was taken as the temperature NDT. The ductility recovery temperature DRT was determined by heating samples to a temperature of 1300 °C at a rate of 20 °C/s, and next at a rate of 1 °C/s to a temperature of 1425 °C. After 5 s of temperature equalisation, the samples were cooled at a rate of 1 °C/s to the deformation temperature, which was between 1370 and 1420 °C. The deformation was preceded by 5 s holding at the set deformation temperature. The samples were tensioned to failure at a rate of 20 mm/s. The temperature DRT is determined when 5% of reduction of area is recovered.

In steel 11SMn30, a reduction of area of 5% was recorded (during cooling) at a deformation temperature of 1400 °C; therefore, this is the temperature DRT. The temperature range between NST and DRT is considered the brittle temperature range [7]. For the steel tested, the brittle temperature range (BRT) is about 25 °C (at a tensioning rate of 20 mm/s). The strength recovery temperature SRT was determined by heating samples to a temperature of 1350 °C at a rate of 20 °C/s, and next at a rate of 1 °C/s to a temperature of 1460 °C (with zonal sample remelting). After 30 s of holding, the samples were cooled at a rate of 10 °C/s to the deformation temperature, which was between 1400 and 1460 °C. The deformation was preceded by 5 s holding at the set deformation temperature. The samples were tensioned to failure at a rate of 20 mm/s. The temperature SRT was determined when a stress of about 0.49 MPa had been recovered; for the conducted tests on steel 11SMn30, this corresponded to a temperature of about 1455 °C. When determining the stress value, a correction was applied: 60 N was deducted from the maximum force value, related to overcoming the resistance to motion of the mechanical system. The knowledge of the characteristic temperatures also allows us to determine the steel susceptibility to fracture, which is characterised by the following fracture resistance indicator Rf:
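The NST acceptance rule described above (two tests, a third one if they disagree by more than 20 °C, then averaging) can be sketched as a small helper. This is an illustrative reading of the procedure, not part of the DEFFEM package; `run_test` and `estimate_nst` are hypothetical names:

```python
def estimate_nst(run_test, tol=20.0):
    """Estimate the nil strength temperature (NST).

    Perform two tests; if the results differ by more than `tol`
    degrees C, perform a third test. The NST is the mean of the
    collected measurements. `run_test` is a callable returning one
    measured NST value in degrees C.
    """
    samples = [run_test(), run_test()]
    if abs(samples[0] - samples[1]) > tol:
        samples.append(run_test())
    return sum(samples) / len(samples)
```

For the steel studied here the chapter instead averages seven samples, which the same helper could do with a longer measurement list.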

Developing a Hybrid Model and a Multi‐Scale 3D Concept of Integrated Modelling High‐Temperature Processes http://dx.doi.org/10.5772/67735 123

$$R\_f = \frac{\text{NST} - \text{NDT}}{\text{NDT}} \tag{1}$$

When the condition NST − NDT < 20 °C is met, it is assumed that no fracture occurs in steels. As the assumed extreme bottom temperature NST for the tested steel 11SMn30 was 1395 °C, the above condition was not met. It indicates that the cast strand shell can break during its formation within the mould and within the secondary cooling zone. Therefore, we can suppose that this steel is characterised by a high susceptibility to fracture. From the perspective of numerical simulations, the issue of developing a function describing changes in stress versus strain, strain rate and the test nominal temperature becomes very important. The relationships of changes in stress versus strain were determined on the basis of tensile tests. In the experiment, a cylindrical sample was heated to a temperature of 1380 °C at a rate of 20 °C/s and next at a rate of 1 °C/s to 1460 °C. The last stage was cooling at a rate of 10 °C/s to the deformation temperature. The samples were deformed within the temperature range of 1200–1460 °C at three various tool stroke rates of 1, 20 and 100 mm/s.
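With the characteristic temperatures quoted above, the fracture resistance indicator of Eq. (1) can be evaluated directly. The snippet below is a minimal sketch using the values reported in the text for 11SMn30 (NST taken at its lower bound of 1395 °C, NDT = 1425 °C); it is an illustration, not part of the author's toolchain:

```python
def fracture_resistance(nst, ndt):
    """Fracture resistance indicator Rf = (NST - NDT) / NDT, Eq. (1)."""
    return (nst - ndt) / ndt

# Values quoted in the text for steel 11SMn30 (degrees C)
nst = 1395.0  # assumed extreme bottom value of the measured NST range
ndt = 1425.0  # nil ductility temperature

rf = fracture_resistance(nst, ndt)  # dimensionless indicator
```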


Figures 3 and 4 present the stress-strain curves obtained directly from the experiment for two various tool stroke rates. The issues of determining the mentioned relationships are complex due to the fact that the temperature field within the sample volume is highly heterogeneous.

Figure 3. Stress-strain relationships for 11SMn30 grade steel (stroke rate 20 mm/s).



Figure 4. Stress-strain relationships for 11SMn30 grade steel (stroke rate 100 mm/s).

At temperatures this high, even small local temperature variations cause rapid local changes in the stress values. The originally applied NIM methodology, supported by inverse calculations, and its later modifications used axially symmetrical solutions [7]. Comprehensive experimental research carried out as part of the project showed substantial discrepancies between the measured values (e.g. force, temperature) for identically repeated tests. In addition, the developed methodology of a 3D analysis of the sample remelting zone using a modern computer tomograph NANOTOM 180N showed a very high asymmetry of the obtained remelting zones [7], which also proved a high asymmetry of the temperature field within the sample volume. Therefore, the obtained results and the conducted analyses raised the question whether the use of the NIM methodology (very time consuming and computationally complex) was reasonable. Another relevant aspect was the quality of the developed functions describing changes in stress versus strain, strain rate and temperature. The variant calculations conducted first with inverse computing and then with a new functional model of resistance heating (presented hereinafter) to determine the temperature field after the resistance heating process led to completely different graphs of stress-strain changes. The mentioned discrepancies arose, among others, from differences in the distribution of the computed temperature field between the two methods, sometimes reaching a few degrees Celsius just in the zone of deformation, and from the limitations of the axially symmetrical models themselves. In the light of the above inconveniences, a new methodology DIM (direct identification methodology) was developed. This methodology is presented in detail in the monograph [7]. The DIM utilises data directly recorded by the Gleeble 3800 simulator and the capabilities of the DEFFEM simulation package (DEFFEM |inverse module).

The new methodology has the great advantage of being fast and flexible in determining the parameters of the function $\sigma_p = f(\varepsilon, \dot{\varepsilon}, T)$, while maintaining a good compatibility of force parameters between the actual and virtual processes.
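The inverse step of such an identification can be pictured as a generic least-squares fit: parameters of an assumed stress function are adjusted until the modelled response matches the recorded one. The power-law form, parameter names and synthetic data below are illustrative assumptions, not the actual DEFFEM |inverse implementation:

```python
def flow_stress(strain, k, n):
    """Hypothetical power-law flow stress model: sigma = K * strain^n."""
    return k * strain ** n

def identify(strains, stresses, k_grid, n_grid):
    """Grid-search least squares: pick the (K, n) pair minimising the
    squared mismatch between 'measured' and modelled stresses, a
    stand-in for identifying sigma_p = f(strain, strain rate, T)."""
    best, best_err = None, float("inf")
    for k in k_grid:
        for n in n_grid:
            err = sum((s - flow_stress(e, k, n)) ** 2
                      for e, s in zip(strains, stresses))
            if err < best_err:
                best, best_err = (k, n), err
    return best

# Synthetic "measured" curve generated from K = 12, n = 0.2
strains = [0.05, 0.1, 0.2, 0.4]
stresses = [flow_stress(e, 12.0, 0.2) for e in strains]
k_fit, n_fit = identify(strains, stresses,
                        k_grid=[10.0, 11.0, 12.0, 13.0],
                        n_grid=[0.1, 0.2, 0.3])
```

In the real methodology the "measured" quantity is the force recorded by the Gleeble simulator and the model response comes from the DEFFEM solver, but the fitting loop has the same shape.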

In addition, as part of the developed methodology for determining high-temperature characteristics, macro- and micro-structural tests were also performed for each steel tested [7]. Examples of the micro-structure of the tested steel 11SMn30 (sample core) deformed at the nominal temperatures of 1350 and 1410 °C are presented in Figure 5. The analysis of the micro-structure shows that acicular ferrite prevails in the deformation zone, because of the high deformation temperature and a relatively high cooling rate. The other components of the micro-structure are bainite and probably martensite.

Figure 5. Micro-structure of the core of a sample deformed at nominal temperatures of 1350 and 1410 °C (stroke rate 20 mm/s, magnification 400×).

## 4. Spatial mathematical models


This section presents the main assumptions of the 3D models implemented within the DEFFEM package. As mentioned before, two alternative model approaches are developed. The first is the multi-scale model MCFE, which enables heating-remelting, and subsequent deformation under conditions of simultaneous solidification, to be simulated. The proposed solution consists of three sub-models: a macro-model based on the finite element method, providing information about the macroscopic behaviour of the deformed medium; a micro-model of the grain-growth, melting and solidification processes based upon the Monte Carlo method; and a third model that couples the macro- and micro-solutions and facilitates the exchange of information between them. The other, alternatively developed model approach is a hybrid FE-SPH model of melting/solidification simulation combining the finite element method (FE) with smoothed particle hydrodynamics (SPH).
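The three-sub-model structure can be pictured as a simple coupling loop in which the macro (FE) solution hands its updated field to the micro (Monte Carlo) solution at every step. All class and method names below are illustrative assumptions sketching the information flow, not the DEFFEM API:

```python
class MacroFE:
    """Stand-in for the FE macro-model: advances one thermal step and
    exposes a (here, single-valued) temperature field."""
    def __init__(self, temperature):
        self.temperature = temperature

    def step(self, dt, cooling_rate):
        self.temperature -= cooling_rate * dt
        return self.temperature

class MicroMC:
    """Stand-in for the Monte Carlo micro-model: records the temperature
    history it is driven with (grain-growth sweeps would run here)."""
    def __init__(self):
        self.history = []

    def step(self, temperature):
        self.history.append(temperature)

def coupled_run(macro, micro, steps, dt, cooling_rate):
    """Coupling model: after every macro step, push the updated field to
    the micro model (one-way exchange, for simplicity of the sketch)."""
    for _ in range(steps):
        t = macro.step(dt, cooling_rate)
        micro.step(t)
    return micro.history

history = coupled_run(MacroFE(1450.0), MicroMC(),
                      steps=3, dt=1.0, cooling_rate=10.0)
```

In the actual MCFE model the exchange is richer (temperature and strain fields in one direction, microstructural state in the other), but the stepping pattern is the same.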

#### 4.1. Multi-scale model

The macro-solution of the mechanical model is based upon the application of a rigid-plastic model for a physical continuum. As shown by tests under conditions of hot deformation, the share of elastic strain in the strain tensor components is small [9, 10]. Applying the law of conservation of energy to a certain isolated system, which in the case concerned is the volume of the deformed metal, one finds that the total work performed in the system in a unit of time is equal to the energy that the system gains in the same time. The energy balance for the deformed zone, referred to a unit of time, may be expressed by the relationship:

$$J = \int\_{V} (\sigma\_i \dot{\varepsilon}\_i + \lambda \dot{\varepsilon}\_{\text{vol}})dV\tag{2}$$

where $\sigma_i$ is the effective stress, $\dot{\varepsilon}_i$ is the effective strain rate, $\dot{\varepsilon}_{\text{vol}}$ is the volumetric strain rate and λ is the Lagrange multiplier. Discretisation of the functional (2) is performed in a typical finite element manner using hexahedral elements. The solution in the form of the temperature field for the macro-model was found by solving the Fourier equation, which in the general form can be written as follows:

$$\nabla^T(\lambda \nabla T) + \left(Q - c_p \rho \frac{\partial T}{\partial \tau}\right) = 0\tag{3}$$

where T is the absolute temperature, λ is the thermal conductivity coefficient, Q is the heat generation rate per volume unit, cp is the specific heat, ρ is the density and τ is the time.

The thermomechanical solution was directly adapted to the boundary conditions reflecting the Gleeble 3800 thermomechanical simulator system [7]. This allows an easy and fast verification of the obtained simulation results on the basis of experimental data. In classic rigid-plastic solutions, the term responsible for the power generated as a result of friction on the metal-tool contact should also be included in the power functional (2) [9]. Samples in the Gleeble simulator system are fixed in a rigid manner; therefore, the friction term (coefficient of friction μ = 0) was not taken into account in the presented model. The resistance heating method is applied for heating the samples in the simulator system. As part of the project, a solution was developed in which a function associates the change of the current intensity characteristics over time with the dependence of resistivity on temperature. The heat source efficiency Q in the model discussed is a function of the resistance R, which in turn depends on the temperature T, and of a function A which represents the intensity of heating:

$$Q = f\left(A(\tau)\, I^2(\tau)\, R(T)\right) \tag{4}$$

It corresponds to the resistance changing in the actual process, with the internal heat source efficiency changing together with the resistance. When modelling the Joule heat generation, it was assumed that its equivalent in the numerical model would be a voluminal heat source with its power related to the resistance and the square of the electric current I during the simulation time τ. The characteristics of the current intensity change as a function of heating time can be directly recorded during physical tests with the Gleeble 3800 simulator.

The Monte Carlo method was applied to simulate grain growth, melting and solidification. The main idea of the Monte Carlo technique is to divide the solution domain of the material into a three-dimensional lattice of cells, where the cells have clearly defined rules of interaction with each other. Monte Carlo simulations have a probabilistic character and are based on minimising the system energy. The equation used for grain growth simulations only considers grain boundary energy. The total energy of the system is the total grain boundary energy, calculated as the sum of all links between neighbouring cells with dissimilar states multiplied by the link energy:

$$E = J_{gbe} \sum_{\langle i,j \rangle} (1 - \delta_{ij}) \tag{5}$$

where Jgbe is the grain boundary energy, i is each cell ranging from 1 to the total number of cells, j is the neighbour of site i ranging from 1 to the number of neighbours of i and δ is the Kronecker delta. The kinetics of grain growth is simulated by the selection of a cell and an attempt to change its state and identifier (the number that identifies affiliation to a specific grain) into the state of a neighbouring grain cell. Cells inside a grain, which do not have neighbours belonging to another grain, cannot change their state. When a grain boundary cell attempts to change its state, it randomly selects a state from one of its neighbouring grains. The change in energy ΔE accompanying this state swap is calculated using Eq. (5), and the swap is accepted with the probability P:

$$P = \begin{cases} e^{-\Delta E / kT}, & \Delta E > 0 \\ 1, & \Delta E \le 0 \end{cases} \tag{6}$$

where kT is a parameter playing, in this context, the role of the simulation temperature. The attempted change in cell state is always successful when ΔE ≤ 0. When ΔE > 0, the change is accepted with the probability P according to the Metropolis algorithm [11]: if a randomly generated number Rn between 0 and 1 is less than P, the cell's state is changed into the new one; otherwise, the cell's state does not change. The specificity of the heating/remelting process analysed in the Gleeble 3800 simulator system is very similar to the issues related to the welding process and to the use of the Monte Carlo method in analysing those processes [11]. Figure 6 presents macro-structures of samples for two extreme cooling rates (maximum and minimum) and for the rate being their average. Here, we can distinguish four zones: the remelting zone (RZ), the transition zone (TZ), the heat impact zone (HIZ) and the grip impact zone (GIZ). The analysis of the macro-structures obtained by the experiment and the numerical simulations indicates a large diversification of grain size in the individual zones. This arises primarily from the various cooling rates achieved locally in the individual zones.

The solidification process (without deformation) may be modelled using both the Monte Carlo model and a model based on Rappaz-Gandin solutions [7]. The concept of modelling the sample solidification process with its simultaneous deformation required developing an innovative

4.1. Multi-scale model

126 Computer Simulation

be written as follows:

function A which represents intensify of heating:

Q ¼ f � AðτÞI 2 ðτÞRðTÞ

The macro-solution of the mechanical model is based upon the application of a rigid-plastic model for a physical continuum. As showed by tests under conditions of hot deformation, the share of elastic strain in the strain tensor components is small [9, 10]. Applying the law of conservation of energy for a certain isolated system, which in the case concerned is the volume of the metal deformed, one can find that the total work performed in the system in a time unit is equal to the energy that this system gains in the same time. The energy balance for the zone

where σ<sup>i</sup> is the effective stress, ε\_<sup>i</sup> is the effective strain rate, ε\_vol is the volumetric strain rate and λ is the Lagrange multiplier. Discretisation of the functional (2) is performed in a typical finite element manner using hexahedral elements. The solution in the form of temperature field for the macro-model was searched by solving the Fourier equation, which in the general form can

where T is the absolute temperature, λ is the thermal conductivity coefficient, Q is the heat generation rate for volume unit, cp is the specific heat, ρ is the density and τ is the time.

The thermomechanical solution was directly adapted to the boundary conditions reflecting the Gleeble 3800 thermomechanical simulator system [7]. It allows an easy and fast verification of the obtained simulation results on the basis of experimental data. In classic rigid-plastic solutions, also the term responsible for the power generated as a result of friction on the metal-tool contact should be included in the power functional (2) [9]. Samples in the Gleeble simulator system are fixed in a rigid manner; therefore, the fraction term (coefficient of friction, μ ¼ 0) was not taken into account in the presented model. The resistance heating method is applied for heating of samples in the simulator system. As part of the project, a solution was developed in which a function associating change in the current intensity characteristics over the time, and resistivity versus temperature, was developed. The heat source efficiency Q in the model discussed is a function of resistance R, which in turn depends on temperature T and

It corresponds to resistance changing in the actual model, and the internal heat source efficiency changes together with the resistance. When modelling the Joule heat generation, it was assumed that its equivalent in the numerical model will be the voluminal heat source with its power related to the resistance and the square of electric current I during simulation time τ.

∂T ∂τ

�

� �

ðσiε\_<sup>i</sup> þ λε\_volÞdV ð2Þ

¼ 0 ð3Þ

ð4Þ

deformed, referred to a time unit may be expressed by the relationship:

J ¼ ð

V

<sup>∇</sup><sup>T</sup>ðλ∇TÞ þ <sup>Q</sup> � cp<sup>ρ</sup>
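As a minimal sketch, Eq. (4) can be evaluated from a recorded current characteristic and a temperature-dependent resistance. The linear R(T) relation and the ramp-shaped current curve below are illustrative assumptions standing in for the measured Gleeble data, not the chapter's actual characteristics:

```python
# Sketch of the voluminal Joule heat source of Eq. (4). The linear
# resistance-temperature relation and the ramp-and-hold current characteristic
# are illustrative assumptions, not measured data.

def resistance(T, r0=1.0e-4, alpha=4.0e-3):
    """Resistance (Ohm) rising linearly with temperature T (assumption)."""
    return r0 * (1.0 + alpha * T)

def current(tau, i_max=3000.0, ramp=10.0):
    """Stand-in for the recorded current characteristic I(tau): ramp, then hold."""
    return i_max * min(tau / ramp, 1.0)

def heat_source(tau, T, A=1.0):
    """Q = A(tau) * I(tau)^2 * R(T): power of the internal heat source."""
    return A * current(tau) ** 2 * resistance(T)
```

As the sample heats up, R(T) grows, so the heat source power tracks both the control of the current and the changing resistance, which is exactly the coupling described above.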

The Monte Carlo method was applied to simulate grain growth, melting and solidification. The main idea of the Monte Carlo technique is to divide the solution domain of the material into a three-dimensional lattice of cells, where the cells have clearly defined interaction rules between each other. Monte Carlo simulations have a probabilistic character and are based on minimising the system energy. The equation used for grain growth simulations considers only the grain boundary energy. The total energy of the system is the total grain boundary energy, which is calculated as the sum of all links between neighbouring cells with dissimilar states multiplied by the link energy as

$$E = J\_{\text{gbe}} \sum\_{\langle i,j \rangle} (1 - \delta\_{ij}) \tag{5}$$

where Jgbe is the grain boundary energy, i indexes the cells ranging from 1 to the total number of cells, j is a neighbour of site i ranging from 1 to the number of neighbours of i and δ is the Kronecker delta. The kinetics of grain growth is simulated by the selection of a cell and an attempt to change its state and identifier (the number that identifies affiliation to a specific grain) into the state of a neighbouring grain cell. Cells inside a grain, which have no neighbours belonging to another grain, cannot change their state. When a grain boundary cell attempts to change its state, it randomly selects a state from one of its neighbouring grains. The energy change accompanying this state swap is calculated using Eq. (5) and is accepted with the probability P:

$$P = \begin{cases} e^{-\frac{\Delta E}{kT}} & \Delta E > 0 \\ 1 & \Delta E \le 0 \end{cases} \tag{6}$$

where kT is a parameter which, in the context here, is the simulation temperature. The attempted change in cell state is always accepted when ΔE ≤ 0. When ΔE > 0, the change in cell state is accepted with probability P using the Metropolis algorithm [11]: if a randomly generated number Rn from 0 to 1 is less than P, the cell's state is changed into the new one; otherwise, the cell's state does not change. The specificity of the heating/remelting process analysed in the Gleeble 3800 simulator system is very similar to that of welding processes, for which the Monte Carlo method has also been used [11]. Figure 6 presents macro-structures of samples for two extreme cooling rates (maximum and minimum) and for the rate being their average. Four zones can be distinguished here: the remelting zone (RZ), transition zone (TZ), heat impact zone (HIZ) and grip impact zone (GIZ). The analysis of macro-structures obtained by the experiment and the numerical simulations indicates a large diversification of grain size in the individual zones. This arises primarily from the various cooling rates achieved locally in individual zones.
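The grain-growth kinetics of Eqs. (5) and (6) can be sketched as a single Metropolis step on a small lattice. For brevity this uses a 2D lattice with a four-cell periodic neighbourhood, a simplification of the chapter's 3D model:

```python
import math
import random

def mc_grain_growth_step(grid, kT, J_gbe=1.0):
    """One Metropolis attempt (Eqs. (5)-(6)): pick a lattice cell and try to
    adopt the state of a randomly chosen neighbouring grain."""
    n, m = len(grid), len(grid[0])
    i, j = random.randrange(n), random.randrange(m)
    # 4-cell neighbourhood with periodic wrapping -- a 2D simplification;
    # the chapter's model uses a three-dimensional lattice.
    nbrs = [grid[(i + di) % n][(j + dj) % m]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    if all(s == grid[i][j] for s in nbrs):
        return  # interior cell: no grain boundary, state cannot change

    def boundary_energy(state):
        # Eq. (5) restricted to the links of this single cell
        return J_gbe * sum(1 for s in nbrs if s != state)

    old = grid[i][j]
    new = random.choice([s for s in nbrs if s != old])
    dE = boundary_energy(new) - boundary_energy(old)
    # Metropolis acceptance rule, Eq. (6)
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        grid[i][j] = new
```

Repeated over many attempts, boundary cells flip toward states that reduce the total boundary energy, so grains with convex boundaries shrink and larger grains grow, which is the coarsening behaviour visible in Figure 6.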

Figure 6. Macro-structure of samples cooled at various rates (physical simulation, computer simulation).

Figure 7. Diagram of the macro (FEM) model and micro (MC) model coupling and the information flow.

The solidification process (without deformation) may be modelled using both the Monte Carlo model and a model based on Rappaz-Gandin solutions [7]. The concept of modelling the sample solidification process with its simultaneous deformation required developing an innovative approach to the coupling of the macro-model with the micro-model. The developed MCFE model is based on the assumption that the cells directly correspond to the finite element integration points. As a result, grain growth, melting and solidification are modelled by the MC model, and the deformation process is modelled by the finite element model. A diagram of the macro- (FE) and micro- (MC) model coupling and the information flow is presented in Figure 7. Coupling between the FE model and the MC model is realised by a special Fortran subroutine mappingMCFE(parameters). In each solution step, information from the macro-model regarding positions of integration points and calculated temperatures is sent to the micro-model. The feedback for the macro-model may be the estimated share of liquid and solid phases, which is used to modify the flow stress value for the specified integration point during the next solution step. The whole procedure is performed in a loop until the end of the simulation.
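The coupling loop described above can be sketched as follows. ToyFE and ToyMC are illustrative stand-ins (assumed names, not the DEFFEM solvers): the FE stub only advances a temperature field and the MC stub only estimates a liquid fraction that is fed back into the flow stress:

```python
# A runnable toy sketch of the macro(FE)-micro(MC) coupling loop. The classes
# are assumptions for illustration only, not the actual DEFFEM API.

class ToyFE:
    def __init__(self, n_points):
        self.temps = [20.0] * n_points            # temperatures at integration points, deg C
        self.flow_stress_scale = [1.0] * n_points

    def solve_step(self, heating_rate=20.0, dt=1.0):
        # stand-in for one thermomechanical FE solution step
        self.temps = [t + heating_rate * dt for t in self.temps]

class ToyMC:
    SOLIDUS, LIQUIDUS = 1400.0, 1500.0            # deg C, illustrative values

    def liquid_fraction(self, temp):
        # stand-in for the MC estimate of the liquid phase share
        x = (temp - self.SOLIDUS) / (self.LIQUIDUS - self.SOLIDUS)
        return min(max(x, 0.0), 1.0)

def coupled_loop(fe, mc, n_steps):
    """mappingMCFE-style exchange: temperatures go to the micro-model, the
    liquid/solid share comes back and modifies the flow stress."""
    for _ in range(n_steps):
        fe.solve_step()
        for i, t in enumerate(fe.temps):
            fe.flow_stress_scale[i] = 1.0 - mc.liquid_fraction(t)  # soften where liquid appears
```

The important design point is the direction of the exchange: positions and temperatures flow macro to micro, while the phase share flows micro to macro, once per solution step.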

#### 4.2. Hybrid model

The hybrid model of the solidification process consists of two solution domains and a special coupling model. The first solution domain is the thermal model based on the finite element method (FEM); within it, the process of controlled heating/remelting is performed, after which the sample is cooled in a controlled manner in the Gleeble 3800 simulator system. The simulation of the solidification process and of the liquid steel flow within the solid phase skeleton is performed by the second solution domain, based on smoothed particle hydrodynamics (SPH). The governing equations of fluids in the SPH method are based on the Navier-Stokes equations in the Lagrangian form. The main equations are given by [12, 13]

$$\frac{d\rho}{d\tau} = -\rho \nabla \cdot v \tag{7}$$


$$
\rho \frac{dv}{d\tau} = -\nabla p + \nabla \cdot \theta + \rho F \tag{8}
$$

where τ is the time, v is the velocity, p is the pressure, F is the external force and θ is a second-order tensor containing τij stresses. Equation (7) is the continuity equation, which describes the evolution of the fluid density over time, and Eq. (8) is the momentum equation, which describes the acceleration of the fluid medium. By employing the SPH interpolation given by

$$\langle \nabla f(r\_i) \rangle \approx \sum\_{j=1}^{N} \frac{m\_j}{\rho\_j} f\_j \nabla\_i W(r\_i - r\_j, h) \tag{9}$$

to Eq. (7), the SPH representation of the continuity equation can be written as follows [12, 13]:

$$\frac{d\rho\_i}{d\tau} = \sum\_{j=1}^{N} m\_j (\upsilon\_i - \upsilon\_j) \cdot \nabla\_i W\_{ij} \tag{10}$$

where mj and ρj are the mass and the density of particle j, respectively, W is the smoothing kernel, index j corresponds to any neighbouring particle of particle i, fj is the value of f for particle j, N is the total number of particles and h is the smoothing length that defines the radius of influence around the current particle i.
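The SPH density evaluation of Eq. (10) can be sketched directly in one dimension. The Gaussian kernel gradient below is an illustrative choice (an assumption; the chapter does not specify its kernel):

```python
import math

def grad_w(ri, rj, h):
    """Gradient (w.r.t. ri) of a 1D Gaussian smoothing kernel W(ri - rj, h).
    Illustrative kernel choice, not necessarily the one used in DEFFEM."""
    q = ri - rj
    w = math.exp(-(q / h) ** 2) / (h * math.sqrt(math.pi))
    return -2.0 * q / h ** 2 * w

def drho_dt(i, x, v, m, h):
    """SPH continuity equation, Eq. (10), in 1D: density rate of particle i
    from the relative velocities of all its neighbours."""
    return sum(m[j] * (v[i] - v[j]) * grad_w(x[i], x[j], h)
               for j in range(len(x)) if j != i)

# Particles converging on the middle one increase its density:
drho_dt(1, [-1.0, 0.0, 1.0], [1.0, 0.0, -1.0], [1.0, 1.0, 1.0], 1.0)  # > 0
```

The sign behaviour matches the physics: converging neighbours give a positive density rate (compression), diverging neighbours a negative one (expansion).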

The momentum equation can be rewritten in the SPH formalism as


$$\frac{d\upsilon\_i}{d\tau} = -\sum\_{j=1}^{N} m\_j \left(\frac{p\_j}{\rho\_j^2} + \frac{p\_i}{\rho\_i^2} + \Pi\_{ij}\right) \cdot \nabla\_i W\_{ij} + F \tag{11}$$

The viscous force used in this implementation is the artificial viscosity term Πij introduced by Monaghan [12]. Equation (11) shows that the change of the motion of a particle is due to the pressure field, the viscosity and the body forces acting on the fluid. An equation of state is required to calculate the pressure in Eq. (11). The equation of state used in the presented model is a quasi-compressible form, which is calculated using the density from Eq. (10) and is given by [12]

$$p = \beta \left[ \left( \frac{\rho}{\rho\_{\text{ref}}} \right)^{\gamma} - 1 \right] \tag{12}$$

where ρref is the reference density, c is the speed of sound, β is the magnitude of the pressure and γ = 7 for liquid steel. Dynamic particles were selected as the definition of the boundary conditions [14].
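A minimal sketch of Eq. (12), with γ = 7 for liquid steel as stated above. The reference density and pressure magnitude below are illustrative placeholders (in practice β is typically tied to the speed of sound):

```python
# Quasi-compressible equation of state, Eq. (12). rho_ref and beta are
# placeholder values for illustration, not the chapter's calibrated numbers.

def tait_pressure(rho, rho_ref, beta, gamma=7.0):
    """p = beta * ((rho / rho_ref)**gamma - 1)."""
    return beta * ((rho / rho_ref) ** gamma - 1.0)

tait_pressure(7000.0, 7000.0, beta=1.0e5)  # 0.0 at the reference density
tait_pressure(7070.0, 7000.0, beta=1.0e5)  # a 1% compression already gives a large positive p
```

The high exponent γ = 7 is what makes the fluid behave as nearly incompressible: a tiny density change produces a disproportionately large restoring pressure.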

The model of heat conduction is based on the enthalpy method which is given by [15, 16]

$$\frac{dH}{d\tau} = \frac{1}{\rho} \nabla(\lambda \nabla T) \tag{13}$$

where H is the enthalpy, λ is the thermal conductivity and T is the temperature. Equation (13) is approximated in the SPH formulation as [15]

$$\frac{dH\_i}{d\tau} = \sum\_j \frac{m\_j}{\rho\_i \rho\_j} \frac{4\lambda\_i \lambda\_j}{(\lambda\_i + \lambda\_j)} (T\_i - T\_j) \frac{(r\_i - r\_j) \cdot \nabla\_i W\_{ij}}{(r\_i - r\_j)^2 + \eta^2} \tag{14}$$

where η is a small parameter to prevent a singularity when (ri − rj) goes to zero. This equation guarantees that the heat flux is automatically continuous across interfaces between different materials, such as between the liquid and solid metal. This also allows multiple phases with different conductivities to be accurately simulated [15]. The model coupling both domains (FE+SPH) is based upon fixing particles to the FE nodes.
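Eq. (14) can be sketched in one dimension as follows. The Gaussian kernel is again an illustrative assumption; the key element is the harmonic-mean conductivity 4λiλj/(λi+λj), which keeps the heat flux continuous when λi ≠ λj:

```python
import math

def grad_w(ri, rj, h):
    """Gradient of a 1D Gaussian kernel (illustrative choice of kernel)."""
    q = ri - rj
    w = math.exp(-(q / h) ** 2) / (h * math.sqrt(math.pi))
    return -2.0 * q / h ** 2 * w

def dH_dt(i, x, T, m, rho, lam, h, eta=1e-6):
    """Enthalpy rate of particle i, Eq. (14), in 1D for brevity."""
    total = 0.0
    for j in range(len(x)):
        if j == i:
            continue
        r = x[i] - x[j]
        k = 4.0 * lam[i] * lam[j] / (lam[i] + lam[j])  # continuous-flux conductivity
        total += (m[j] / (rho[i] * rho[j]) * k * (T[i] - T[j])
                  * r * grad_w(x[i], x[j], h) / (r * r + eta * eta))
    return total
```

A quick sanity check: a hot particle sitting between two colder neighbours loses enthalpy (dH/dτ < 0), as the physical process requires.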

#### 5. Computer simulations: example results



This section presents examples of results of numerical simulations performed with the DEFFEM package developed. The first simulation range included the process of heating and deforming within temperature ranges close to the solidus line (without the liquid phase). The obtained findings were verified on the basis of the developed methodology of deformation zone measurement using 3D scanning technology with blue light [7]. The second stage of work included simulations concerning fluid mechanics. The obtained findings were verified on the basis of well-known laws of physics.

#### 5.1. Test simulations of the DEFFEM|solver\_3D\_TM

The numerical simulations and experiments were carried out with two types of hexahedral samples with a cross-section of 10 × 10 mm and lengths of 100 mm (sample type B) and 125 mm (sample type C), made of steel S355 [7, 17]. The experiments and numerical computations were conducted on the basis of the research methodology presented in study [7]. In the simulation, a sample was heated to a temperature of 1400 °C at a rate of 20 °C/s, and next to a temperature of 1450 °C at a rate of 1 °C/s. Figures 8–11 present the temperature distribution for the selected heating stages for both sample types. Comparing the observed temperatures at the sample surface with the maximum temperatures achieved in the sample core, one may observe an increasing temperature gradient across the cross-section as the heating time passes. For a sample type B, after 10 s this difference was 4.4 °C, growing to 33.9 °C after 70 s. For the variant with a sample type C (longer), these differences were slightly bigger: 4.7 and 35.2 °C, respectively.
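The two-stage heating schedule above can be sketched as a simple piecewise function. Starting from 0 °C is a simplifying assumption (the samples actually start at ambient temperature); with it, the fast stage lasts exactly 70 s, which matches the "after 70 s" snapshots:

```python
# Two-stage heating schedule: 20 deg C/s up to 1400 deg C, then 1 deg C/s up
# to 1450 deg C. The initial temperature t0 = 0 deg C is an assumption made
# for simplicity.

def temperature_at(t, t0=0.0):
    """Nominal control temperature (deg C) at time t (s)."""
    t_stage1 = (1400.0 - t0) / 20.0            # duration of the 20 deg C/s stage
    if t <= t_stage1:
        return t0 + 20.0 * t
    return min(1400.0 + 1.0 * (t - t_stage1), 1450.0)

temperature_at(10.0)   # 200.0, close to the ~198 deg C surface value in Figure 8
temperature_at(70.0)   # 1400.0, the end of the fast stage
```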

In order to verify the obtained results of the heating simulation against the experiments, a thermovision camera was used to record the profile of the temperature change along the sample heating zone (Figure 12). The results of the experiment and the simulation are both characterised by a parabolic course and show a reasonable compatibility between the experimental and calculated values; the average relative error calculated for 13 checkpoints was 14.5%. At the last stage, the deformation process was performed, assuming a stroke of 5 mm and a stroke rate of 1 mm, after cooling the sample at a rate of 10 °C/s to the nominal deformation temperature. The deformation experiments were performed for a temperature range of 1200–1400 °C. Figure 13 presents the total Z displacement distribution and the general characteristics of the boundary conditions identified in the Gleeble simulator system and adapted to the numerical solution [7]. Three main zones can be distinguished for both sample types. The first zone (sample fixed) defines the grip contact area that does not move during the physical simulation, and the second zone (sample movement) is the grip contact zone that moves by a set stroke. The central zone is a free zone with the highest strain accumulation, which for a sample type B is 39.5 mm long.


Figure 8. Temperature distribution after 10 s of heating (sample type B, surface temperature 198.286 °C).

Figure 9. Temperature distribution after 70 s of heating (sample type B, surface temperature 1399.13 °C).

Figure 10. Temperature distribution after 10 s of heating (sample type C, surface temperature 205.405 °C).


Figure 11. Temperature distribution after 70 s of heating (sample type C, surface temperature 1406.63 °C).

Figure 12. The temperature profile along the heating zone calculated and determined by the experiment (test nominal temperature 1400 °C, sample type C).


Figures 14 and 15 present the strain distribution (εz component) for the strain variant at two nominal temperatures, 1200 and 1400 °C. Strain accumulates mainly in the central deformation zone, reaching slightly higher values for the deformation test performed at a temperature of 1400 °C.


Figure 13. Total Z displacement distribution (sample type B, test nominal temperature 1400 °C).

Figure 14. Strain distribution, εz component (sample type C, test nominal temperature 1200 °C).


Analysing the shape and size of the strain zones resulting from the simulation (Figures 14 and 15) and the experiment (Figure 16), one may observe that they feature a very high geometrical similarity. To verify the obtained findings, the developed measurement methodology with blue light scanning was applied [7]. Figure 17 presents a map of deviations between the numerical calculation meshes obtained after the deformation process at temperatures of 1200 and 1400 °C. The sample deformed at 1200 °C was selected as the standard in the mapping procedure. The analysis of the obtained results indicates an increase in the cross-section of the sample deformed at 1400 °C, and a slight decrease in the strain zone length compared to the sample deformed at 1200 °C.


Figure 15. Strain distribution, εz component (sample type C, test nominal temperature 1400 °C).

Figure 16. Pictures of samples after deforming at 1200 and 1400 °C (sample type C).

An analogous methodology was applied to compare the shape of the deformation zone obtained from the computer simulation to the experimental shape (Figures 18–25). An analysis of the obtained deviation maps indicates that a local difference of 1 mm was not exceeded in any case. The maximum absolute value was 0.99 mm for a sample type B deformed at a temperature of 1400 °C, and the minimum absolute value was 0.4583 mm for a sample type B deformed at 1350 °C.
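The deviation-map idea above can be sketched as a nearest-neighbour distance between the simulated mesh nodes and the scanned reference points. Real pipelines project nodes onto the scanned surface, so this is only a simplified illustration:

```python
import math

def deviation_map(mesh, reference):
    """Per-node deviation: distance from each node of the simulated mesh to
    the nearest point of the scanned reference mesh (point lists of 3-tuples).
    Simplified stand-in for a proper surface-projection deviation map."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(p, q) for q in reference) for p in mesh]

def max_abs_deviation(mesh, reference):
    """The single number reported per map, e.g. 0.99 mm in the worst case above."""
    return max(deviation_map(mesh, reference))
```

For dense scans this brute-force search is O(n·m); a k-d tree would be the usual optimisation, but the principle is the same.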


Figure 18. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type C, 1200 °C).

Figure 19. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type B, 1200 °C).


Figure 17. The deviation map between two finite element meshes for two deformation temperatures (sample type C).


Figure 20. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type C, 1300 °C).

Figure 21. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type B, 1300 °C).

Figure 22. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type C, 1350 °C).

#### 5.2. Test simulations of the DEFFEM|solver\_3D\_FLUID

Two simulations were carried out as part of the test simulations of the fluid mechanics solver. The first case, free fall of particles, was carried out in order to check the Runge-Kutta integration scheme used in the implementation. The second one is oriented towards a structure impact simulation based on dynamic particles; the main question for this case is whether the dynamic particles can handle this kind of boundary condition. The schemes of the initial geometries of these problems are presented in Figures 26 and 27. The solution domain consists of 7351 (free fall case) / 29,791 (barrier case) moving particles, while 29,402 (free fall case) / 34,443 (barrier case) dynamic particles represent the boundary condition as a box (width = 1.0 m, height = 1.0 m and length = 1.0 m). The initial drop height for the free fall case was set to 0.3 m. The other parameters were adopted as: initial smoothing length = 0.024 m, speed of sound 30 m/s, α equal to 0.5 and simulation time 2.0 s.
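Since the free-fall case has a closed-form solution, the solver output can be sanity-checked against simple kinematics (assuming g = 9.81 m/s² and neglecting any SPH interaction before impact):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2 (assumed value)

def free_fall_velocity(t):
    """Analytic velocity (m/s) after falling for t seconds from rest, no drag."""
    return G * t

def time_to_fall(height):
    """Time (s) to fall a given height (m) from rest."""
    return math.sqrt(2.0 * height / G)

free_fall_velocity(0.2)   # 1.962 m/s, the analytic reference value at t = 0.2 s
time_to_fall(0.3)         # ~0.247 s until the 0.3 m drop reaches the substrate
```

At t = 0.2 s the fluid block dropped from 0.3 m has not yet reached the substrate, so its mean particle velocity should match g·t, which is exactly the check reported for Figure 28.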

Figure 25. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type

Figure 24. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type

Developing a Hybrid Model and a Multi‐Scale 3D Concept of Integrated Modelling High‐Temperature Processes

http://dx.doi.org/10.5772/67735



Figure 23. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type B, 1350°C).


Figure 24. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type C, 1400°C).

Figure 25. The deviation map between the finite element mesh and the mesh obtained from the 3D scanner (sample type B, 1400°C).

#### 5.2. Test simulations of the DEFFEM|solver\_3D\_FLUID


Two simulations were carried out as part of the tests of the fluid mechanics solver. The first case, a free fall of particles, was carried out in order to check the Runge-Kutta integration scheme used in the implementation. The second one is a structure impact simulation based on dynamic particles; the main question here is whether the dynamic particles can handle this kind of boundary condition. The schemes of the initial geometries of these problems are presented in Figures 26 and 27. The solution domain consists of 7351 (free fall case) / 29,791 (barrier case) moving particles, while 29,402 (free fall case) / 34,443 (barrier case) dynamic particles represent the boundary condition as a box built particle by particle (width = 1.0 m, height = 1.0 m, length = 1.0 m). The initial drop height for the free fall case was set to 0.3 m. The other parameters were adopted as follows: initial smoothing length = 0.024 m, speed of sound = 30 m/s, α = 0.5 and simulation time = 2.0 s.
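For reference, the two SPH ingredients that these parameters refer to — a smoothing kernel with length h = 0.024 m, and Monaghan's artificial viscosity with α = 0.5 and speed of sound c = 30 m/s — can be sketched as below. This is a generic textbook formulation in the spirit of [12, 13], not the DEFFEM source; the density value and β = 0 are assumptions for the sketch.

```python
import math

def cubic_spline_W(r, h):
    """Monaghan's cubic-spline smoothing kernel in 3D (support radius 2h)."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalisation constant
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q <= 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def artificial_viscosity(v_ab, r_ab, h, c=30.0, alpha=0.5, beta=0.0,
                         rho_bar=7000.0, eps=0.01):
    """Monaghan artificial-viscosity term Pi_ab for a particle pair.
    rho_bar (steel-like mean density) and beta = 0 are assumed values."""
    dot = sum(v * r for v, r in zip(v_ab, r_ab))
    if dot >= 0.0:  # particles receding: no dissipation
        return 0.0
    r2 = sum(r * r for r in r_ab)
    mu = h * dot / (r2 + eps * h ** 2)
    return (-alpha * c * mu + beta * mu ** 2) / rho_bar
```

The viscosity switches on only for approaching pairs, which is how the α = 0.5 setting above damps unphysical particle interpenetration.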

Figure 26. Solution domains for the 'free fall' test simulation.

Figure 27. Solution domains for the 'barrier' test simulation.

Figure 28 presents the velocity field for a simulation time of 0.2 s, at which the solution domain (fluid) had not yet interacted with the substrate defined by the dynamic particles representing the boundary condition. The mean particle velocity is close to 2 m/s and corresponds very well to the analytical solution (∼1.96 m/s). Figure 29, in turn, presents the velocity field for a simulation time of 1.96 s, when the velocity field has already stabilised.

Figure 28. Vector velocity field (simulation time ≈0.20 s).

Figure 29. Vector velocity field (simulation time ≈1.96 s).
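The quoted analytical velocity follows directly from free fall, v = g·t (assuming g = 9.81 m/s²):

```python
G = 9.81               # gravitational acceleration, m/s^2 (assumed value)
t = 0.2                # simulation time, s
v_analytical = G * t   # 1.962 m/s, i.e. the ~1.96 m/s quoted above
```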

The numerical solution of the particle fall corresponds excellently to the analytical solution with respect to the z position. The starting z position of particle ID:1 was 0.44 m; after 0.2 s, the calculated z position of particle ID:1 was 0.24372551083597468 m, while the analytical solution gives 0.243718462 m. This result indicates that the Runge-Kutta integration scheme used in the implementation works correctly. Figures 30–35 present the selected stages of the free flow simulation, taking account of the fluid-structure interaction.
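The free-fall check can be reproduced with a classic fourth-order Runge-Kutta scheme: for constant acceleration, RK4 reproduces the quadratic closed-form trajectory to machine precision, which is exactly the property this solver test exploits. A sketch assuming g = 9.81 m/s² (the chapter's analytical value of 0.243718 m implies a marginally different g):

```python
G = 9.81  # assumed gravitational acceleration, m/s^2

def rk4_free_fall(z0, v0, t_end, n_steps):
    """Integrate z' = v, v' = -G with the classic RK4 scheme."""
    dt = t_end / n_steps
    z, v = z0, v0
    for _ in range(n_steps):
        # slopes of (z, v) at the four RK4 stages
        k1z, k1v = v, -G
        k2z, k2v = v + 0.5 * dt * k1v, -G
        k3z, k3v = v + 0.5 * dt * k2v, -G
        k4z, k4v = v + dt * k3v, -G
        z += dt / 6.0 * (k1z + 2 * k2z + 2 * k3z + k4z)
        v += dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return z, v

z_num, v_num = rk4_free_fall(z0=0.44, v0=0.0, t_end=0.2, n_steps=200)
z_exact = 0.44 - 0.5 * G * 0.2 ** 2  # closed-form position, ~0.2438 m
```

Comparing `z_num` against `z_exact` is the same consistency check reported for particle ID:1 above.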

Figure 30. Vector velocity field (simulation time ≈0.16 s).

Figure 31. Vector velocity field (simulation time ≈0.36 s).

Figure 32. Vector velocity field (simulation time ≈0.56 s).

Figure 33. Vector velocity field (simulation time ≈0.96 s).


The analysis of the obtained results indicates that the implemented interaction model is correct. A higher velocity can be observed where the fluid passes through the barrier defined by the dynamic particles: the velocity increases because a strong repulsion force is generated by the boundary (Figure 31). In the subsequent steps of the simulation, the fluid in the computing domain slowly settles and the velocity field stabilises (Figures 34 and 35). However, more tests for various variants are required for further detailed verification.
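A common way to realise such a repulsive boundary is the Lennard-Jones-type force Monaghan proposed for free-surface SPH [13]; dynamic particles in the sense of [14] instead mirror the fluid equations on the boundary, so treat this only as an illustrative sketch with assumed coefficients, not as the DEFFEM formulation:

```python
def boundary_repulsion(r, r0=0.024, D=4.0, p1=12, p2=4):
    """Magnitude of a Lennard-Jones-type repulsive force exerted by a
    boundary particle at distance r; zero beyond the cutoff r0.
    r0 ~ the initial particle spacing and D ~ the square of the largest
    expected fluid velocity (both assumed values here)."""
    if r <= 0.0 or r >= r0:
        return 0.0
    q = r0 / r
    return D * (q ** p1 - q ** p2) / r
```

The force grows steeply as a fluid particle approaches the wall, which produces the velocity increase seen near the barrier in Figure 31.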

## 6. Summary

Figure 34. Vector velocity field (simulation time ≈1.56 s).

Figure 35. Vector velocity field (simulation time ≈2.00 s).

This chapter presents a multi-scale MCFE model (Monte Carlo and finite element) and a hybrid FESPH model combining the advantages of the finite element method (FEM) and smoothed particle hydrodynamics (SPH). The developed models and methods constitute the basis of a scientific workshop focused on high-temperature processes. A great advantage of such a solution is the full openness of the source code of the designed simulation system, which will be the basis for developing further new and innovative solutions. The conducted test simulations concerning continuum mechanics (the FEM solution) and fluid mechanics (the SPH solution) indicate the correctness of the implemented mathematical models in the context of determining temperature fields, strains or velocity fields. The DEFFEM package was successfully applied as a design-aiding tool in practical projects accomplished with industrial partners. Ongoing tests in collaboration with industrial plants from the aviation and casting industries will allow additional industrial tests to be performed and the suitability of the software for computer-aided design to be evaluated.

## Acknowledgements

The research has been supported by the Polish National Science Centre (2012–2017), decision number: DEC-2011/03/D/ST8/04041.

## Author details

Marcin Hojny

Address all correspondence to: mhojny@metal.agh.edu.pl

AGH University of Science and Technology, Kraków, Poland

## References

[1] Bald W et al. (2000) Innovative technologies for strip production. Steel Times Int. 24: 16–19.

[2] Cook R, Grocock PG, Thomas PM et al. (1995) Development of the twin-roll casting process. J Mater Proc Technol. 55:76–84.

[3] Fan P, Zhou S, Liang X et al. (1997) Thin strip casting of high speed steels. J Mater Proc Technol. 63:792–796.

[4] Park CM, Kim W, Park GJ. (2003) Thermal analysis of the roll in the strip casting process. Mech Res Commun. 30:297–310.

[5] Seo PK, Park K, Kang C. (2004) Semi-solid die casting process with three steps die system. J Mater Proc Technol. 154:442–449.

[6] Watari H, Davey K, Rasgado MT et al. (2004) Semi-solid manufacturing process of magnesium alloys by twin-roll casting. J Mater Proc Technol. 156:1662–1667.

[7] Hojny M. (2017) Modeling of Steel Deformation in the Semi-Solid State. Advanced Structured Materials, vol. 78. Springer, Switzerland.

[8] Hojny M. (2014) Projektowanie dedykowanych systemów symulacji odkształcania stali w stanie półciekłym [Design of dedicated systems for the simulation of semi-solid steel deformation]. Wzorek, Krakow, Poland.

[9] Pietrzyk M. (1992) Metody numeryczne w przeróbce plastycznej metali [Numerical methods in metal forming]. AGH, Krakow, Poland.

[10] Glowacki M. (1998) Termomechaniczno-mikrostrukturalny model walcowania w wykrojach kształtowych [Thermo-mechanical and microstructural model of shape rolling]. AGH, Krakow, Poland.

[11] Yang Z, Sista S et al. (2000) Three dimensional Monte Carlo simulation of grain growth during GTA welding of titanium. Acta Mater. 48:4813–4825.

[12] Monaghan JJ. (1992) Smoothed particle hydrodynamics. Annu Rev Astron Astrophys. 30:543–574.

[13] Monaghan JJ. (1994) Simulating free surface flows with SPH. J Comput Phys. 110:399–406.

[14] Crespo AJC et al. (2007) Boundary conditions generated by dynamic particles in SPH methods. CMC—Tech Science Press USA. 5:173–184.

[15] Cleary PW, Monaghan JJ. (1999) Conduction modelling using smoothed particle hydrodynamics. J Comput Phys. 148:227–264.

[16] Faqih RA, Naa CF. (2013) Three-dimensional smoothed particle hydrodynamics simulation for liquid metal solidification process. Mater. Proc. ISCS.

[17] Hojny M, Glowacki M. (2011) Modeling of strain-stress relationship for carbon steel deformed at temperature exceeding hot rolling range. J Eng Mater Technol. 133:021008-1–021008-7.

**Chapter 7**

## **Computer‐Aided Physical Simulation of the Soft‐Reduction and Rolling Process**

Marcin Hojny

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.68606

### Abstract

The chapter presents experimental problems related to research aiming at obtaining the data necessary to formulate a physical model of the deformation of steel containing a zone consisting of a mixture of the solid and the liquid phases. This issue is strictly related to the application of the soft-reduction process in integrated strip casting and rolling. The original part of the developed methodology is the computer aiding of the experiments, performed with the proprietary DEFFEM simulation package. In order to solve problems related to the deformation of materials with a semi-solid core or at extra-high temperatures, comprehensive tests were applied, covering both physical modelling with a Gleeble 3800 thermo-mechanical simulator and mathematical modelling. Examples of the research findings presented in this chapter show that the developed methodology is correct in the context of computer-aided experimentation, and demonstrate the need to develop the software further in order to implement full 3D models.

Keywords: mushy zone, finite elements method, extra-high temperature, resistance heating, tomography

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## 1. Introduction

In recent years, a strong trend towards the development of integrated metallurgical processes can be observed. In these processes, the strand is cast to a shape and dimensions near those of the final product, and casting is combined with product rolling [1–7]. Processes of integrated strip casting and rolling, which feature less material processing and a direct connection of the casting process and strand rolling, can serve as an example. Here, problems related to steel ductility and the formation of an appropriate product microstructure are very relevant. During an integrated process, contrary to classic hot forming processes, the strand is not cooled to a temperature at which austenite disintegrates [1–7]. Therefore, the original structure of the input material to the rolling process is the as-cast structure, which causes serious difficulties during the production cycle. The increasing access to modern Gleeble-series physical simulators allows the knowledge obtained during experiments to be used in designing integrated strip casting and rolling processes [8, 9]. A Gleeble 3800 simulator enables a so-called physical simulation of a specific process to be carried out. The idea of such a simulation is to use a small sample made of the same material that is used in the production process. The changes in stress, strain and temperature that the material undergoes in the actual production process are reconstructed in this sample, which is most often cylindrical [10]. On the basis of the performed physical simulation cycle (variants for various cooling rates, stroke rates, etc.), process maps are developed, and these maps help to estimate the optimal parameters of the process line equipment. Special diagrams are then constructed, in which areas of limited ductility are marked. Knowing such areas thoroughly allows the process parameters to be adjusted so as to avoid potential strand cracking. Despite the huge capabilities of thermo-mechanical simulators as regards simulations combining a physical simulation of solidification with a simultaneous plastic deformation set during, as well as immediately after, the total solidification, each steel grade requires separate comprehensive tests. Therefore, computer simulation is recommended. The main problem with computer simulations is the lack of constitutive equations that would allow the plastic behaviour of the steel tested to be determined. We need to emphasize that testing the mechanical and physical properties, as well as the sample deformation itself, at such high temperatures is only possible to a limited extent, with a strictly specified test methodology [10].
Therefore, the goal of this study is to develop a methodology to enable a multi-stage simulation combining the steel deformation process in the semi-solid state and in the solid state, as well as to evaluate its suitability for the future research and development work.


## 2. Physical simulation

The experimental tests were conducted with a Gleeble 3800 thermo-mechanical simulator. The basic tests were conducted with cylindrical samples made of steel S355, with a length of 125 mm and a diameter of 10 mm, using two types of copper grips, so-called "hot" and "cold" ones. The selection of an appropriate grip type determines the attainable size of the free zone of the sample deformed [10]. For "cold" grips, the free zone is about 30 mm, whereas for "hot" grips, it is about 67 mm. In addition, the type of the applied grips substantially influences the attainable width of the remelting zone, which increases as the nominal test temperature increases, and the obtained temperature gradient along the heating zone and on the sample cross-section [10]. During the tests, the temperature was recorded as indicated by thermocouples: TC1 (near the place of the sample-grip contact), TC2 (at a distance of 7.5 mm from the heating zone centre), TC4 (the centre of the heating zone, control thermocouple) and TC3 (temperature measurement within the sample core) (Figure 1).

Changes in force versus tool movement, and changes in electrical current versus time were additional values measured during the experiments. From the perspective of methodology development one should take into account the temperatures that are characteristic to the

Figure 1. A cylindrical sample used during experiments with the thermocouple location.


specific tested material [10]. The liquidus (Tl) and solidus (Ts) temperatures, the nil strength temperature (NST), the nil ductility temperature (NDT) and the ductility recovery temperature (DRT) are the most important. The determined characteristic temperatures allow a thermal map of the process (TMP) to be developed. This map allows us, among others, to determine the temperature ranges in which the liquid phase appears or disappears, or the temperature above which the mechanical properties of the analysed medium degrade. For steel S355, the liquidus Tl and solidus Ts temperatures were 1513 and 1465°C, respectively. The knowledge of the characteristic temperatures also allows us to determine the steel's susceptibility to fracture, which is characterised by the following fracture resistance indicator Rf:

$$R\_f = \frac{\text{NST} - \text{NDT}}{\text{NDT}} \tag{1}$$

Under the procedure of continuous casting physical simulation [10], it is assumed that the steel tested is not susceptible to cracking when the difference between the NST and NDT temperatures is less than 20°C. Referring to the characteristic temperatures NST and NDT of the steel S355 tested, which are 1448 and 1420°C respectively, it can be observed that this condition is not met, which indicates that the steel tested is susceptible to cracking during the production cycle. More details concerning the determination of the temperature characteristics of steel S355 can be found in publications [8–10]. The execution of experiments at temperatures reaching the solidus temperature range required a strictly planned and controlled experiment course. Figure 2 presents the view of a sample at three selected stages of physical simulation, where remelting within the liquidus and solidus temperature range was performed. At the first stage (left), one may observe the explicit remelting zone (a mixture of the solid and liquid phases) being formed. At the second and third stages (middle and right), when the deformation started, an adverse effect of liquid steel blow-out occurred within the remelting phase. In order to eliminate this problem, the orientation of the injection nozzles situated perpendicularly to the sample was changed to an
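Plugging the measured characteristic temperatures of steel S355 into Eq. (1) and the 20°C criterion makes the conclusion explicit:

```python
# Characteristic temperatures of steel S355 from the Gleeble tests
NST = 1448.0  # nil strength temperature, deg C
NDT = 1420.0  # nil ductility temperature, deg C

R_f = (NST - NDT) / NDT      # fracture resistance indicator, Eq. (1)
delta = NST - NDT            # temperature difference used in the criterion
susceptible = delta >= 20.0  # < 20 deg C would mean 'not susceptible'
# delta = 28 deg C, so susceptible is True: prone to cracking in production
```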

Figure 2. The view of the sample at the selected physical simulation stages (S355 grade steel).

angle of about 45°. This change stabilised the injection by reducing the blowing of the liquid steel out of the sample.

The problem of an uncontrolled leakage was also observed for deformation within the mixed phase using relatively low tool stroke rates between 1 and 20 mm/s. Using a tool stroke rate of around 100 mm/s allowed us to fully accomplish the assumed experiment plan [10]. Therefore, one may conclude that the obtained pilot simulation results open up new research areas directed towards high stroke rate testing. In a part of the primary tests, in order to reduce the risk of liquid steel leaking into the simulator, a quartz shield was applied with a length of about 30 mm, with a gap of 2–3 mm along the shield to enable thermocouples to be installed (Figure 1). Another problem encountered during the tests was the occurrence of rapid strength changes. One of the basic relationships determining the plastic behaviour of metal at a very high temperature is the dependence of the yield stress on temperature, strain rate and strain. Plastic deformation at higher temperatures is subject to a number of further restrictions. The material changes its density during cooling, and in a certain temperature range it shows limited formability or a complete lack of ductility. The large inhomogeneity of deformation, and the fact that even small temperature changes in these conditions cause rapid changes in the yield stress, lead to diverging results. Figure 3 presents examples of the maximum forces achieved during two identical compression tests at the selected nominal temperatures of deformation. Figure 4, in turn, presents examples of the maximum values of tensile stress for two identical tests at the selected nominal temperatures of deformation. The presented results concern tests conducted with steel C45 (carbon content 0.45%). Slight temperature measurement differences, reaching a few degrees, were also observed [10].
Therefore, as mentioned before, the sample deformation itself at such high temperatures is only possible to a limited extent, with a strictly specified test methodology.

The adopted test methodology included:

• developing and conducting a physical simulation cycle,

• developing a numerical model of resistance heating in the Gleeble 3800 simulator system,

• developing a new methodology of direct determination of the strain-stress relationship on the basis of data obtained from physical simulations,

Computer‐Aided Physical Simulation of the Soft‐Reduction and Rolling Process

http://dx.doi.org/10.5772/intechopen.68606

151

• test simulations with the DEFFEM package and experimental verification of the developed methodology.

The tests were combined with a cycle of micro- and macrostructural tests.

As part of the starting physical simulation, three experiments were carried out. The fundamental difference between these experiments was in the execution of deformation in various phases of the process. In the first test, the sample was deformed in the remelting phase. The experiment schedule consisted of heating the sample to a temperature of 1400°C at a heating rate of 20°C/s; in the next stage, the heating rate was reduced to 1°C/s until the temperature of 1485°C was reached. The third stage was holding at a temperature of 1485°C for 30s in order to stabilize the temperature within the sample volume. The last stage was the deformation in the remelting phase (stroke = 1.2mm and stroke rate = 0.04mm/s). The first three stages of the second test were analogous to the first test. The last stage was the execution of the deformation process (stroke = 1.5mm and stroke rate = 0.25mm/s) in the solidification stage, starting when the sample had cooled down. The third test was a combination of the first and the second ones. The deformation was executed in the remelting phase (stroke = 1.2mm and stroke rate = 0.04mm/s), and next in the solidification stage (stroke = 1.5mm and stroke rate = 0.25mm/s). Figures 5–7 present macrostructures of samples for the individual tests. Their nature differs depending on the applied tool variant. After the process of sample etching, the occurrence of a columnar zone was found in the
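The staged heating programme described above (a fast ramp, a slow ramp to the nominal temperature, and a stabilization hold) can be expressed as a simple piecewise temperature-time function. The sketch below only transcribes the rates and setpoints quoted in the text; the function name and starting temperature are our own illustrative assumptions, not part of the original test software.

```python
# Piecewise temperature-time programme of the first test:
# 20 °C/s up to 1400 °C, then 1 °C/s up to 1485 °C, then a 30 s hold.
def schedule_temperature(t, t0=20.0):
    """Return the programmed temperature (°C) at time t (s), starting at t0 (assumed)."""
    ramp1 = (1400.0 - t0) / 20.0          # duration of the 20 °C/s stage
    ramp2 = (1485.0 - 1400.0) / 1.0       # duration of the 1 °C/s stage
    if t <= ramp1:
        return t0 + 20.0 * t
    if t <= ramp1 + ramp2:
        return 1400.0 + 1.0 * (t - ramp1)
    # temperature-stabilization hold (30 s) and the subsequent deformation stage
    return 1485.0

print(schedule_temperature(0.0))    # 20.0
print(schedule_temperature(69.0))   # end of the first ramp: 1400.0
print(schedule_temperature(154.0))  # end of the second ramp: 1485.0
```

The second and third tests reuse the same three heating stages and differ only in when the stroke is applied.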

Figure 3. The maximum values of forces achieved during compression tests for the selected nominal temperatures of deformation.

Figure 4. The maximum values of tensile stress achieved during tests for the selected nominal temperatures of deformation.


Figure 5. Macrostructure of samples deformed in the remelting phase ("hot" and "cold" grips).

Figure 6. Macrostructure of samples deformed in the solidification phase ("hot" and "cold" grips).

remelting zone. The crystals were growing in the heat discharge direction (Figures 5–7). When analysing macrostructures made in the sample longitudinal section, the formation of porous areas of various intensities can be observed (Figures 5–7). Here, the variant of selected experimental tools is very important. For the variant with "hot" grips and deformation in the remelting phase (Figure 5), the formation of a large shrink hole was observed. Application of deformation

in the solidification phase partially eliminated this effect (Figure 6). However, analyses of the macrostructure for the "hot" grip variant still showed small areas of central porosity (Figures 6 and 7). This may indicate that the applied strain value is insufficient.

Figure 7. Macrostructure of samples deformed in the remelting and solidification phase ("hot" and "cold" grips).

Figure 8. Microstructure of samples deformed in the solidification phase (core, "hot" grips).


Figures 8–11 present examples of microstructures of two selected areas of the sample, i.e. the sample core and a place located next to the sample tip (near the place of thermocouple installation), for both tool variants. Analysing the sample centre microstructure (Figures 8 and 10), we can distinguish white (slightly needle-shaped) ferrite and/or bainite. The former austenite is mainly martensite formed after cooling to the ambient temperature, and it is more dominant for the simulation variant carried out with cold grips (Figure 10). Martensite is light brown


Figure 11. Microstructure of samples deformed in the solidification phase (tip, "cold" grips).


Figure 9. Microstructure of samples deformed in the solidification phase (tip, "hot" grips).

Figure 10. Microstructure of samples deformed in the solidification phase (core, "cold" grips).


in the presented microstructures. The analysis of the microstructure of the surface zones (Figures 9 and 11) shows a slightly different nature of the share of individual phases. The obtained differences arise mainly from the cooling rate, which is two times lower for the samples heated in hot grips, and from the difference in the cooling rates between the individual zones of the sample deformed.

The conducted first cycle of physical simulations allowed us to plan and carry out a pilot simulation of the integrated strip casting and rolling process. In the performed simulation, the deformation process was conducted in the crystallization phase, and finally a rough-rolling simulation was performed. The main goal of deformation in the crystallization stage was to ensure the filling of the full volume of the remelted sample, as well as to eliminate shrinkage effects. The second cycle of physical simulations included tests aiming at providing data to the computer-aided methodology of determination of the strain-stress relationship, necessary to build the model of changes in stress versus strain, strain rate and temperature for the needs of numerical simulations. The results of computer-aided physical simulations are presented hereinafter.
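A model of stress as a function of strain, strain rate and temperature, of the kind the second simulation cycle provides data for, is commonly written in a multiplicative power-law form. The sketch below uses a generic constitutive form with entirely hypothetical coefficients for illustration; it is not the model identified in this chapter.

```python
import math

# Illustrative constitutive relationship: flow stress as a function of
# strain, strain rate and temperature. The functional form
# sigma = K * eps^n * epsdot^m * exp(q / T) and all coefficient values
# are placeholders, NOT the model fitted by the authors.
def flow_stress(strain, strain_rate, temp_k,
                K=1200.0, n=0.15, m=0.12, q=2800.0):
    """Flow stress (MPa) for strain (-), strain rate (1/s), temperature (K)."""
    return K * strain**n * strain_rate**m * math.exp(q / temp_k)

# Qualitative behaviour: stress falls as temperature rises,
# and grows with strain rate.
s_hot = flow_stress(0.1, 0.04, 1700.0)
s_cold = flow_stress(0.1, 0.04, 1400.0)
```

Fitting such a model amounts to choosing K, n, m and q so that computed stresses reproduce the force-displacement records of the physical simulations.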

## 3. Computer aid of experiment


As part of the computer-aided experiment, the proprietary simulation package DEFFEM [10], designed according to the ONEDES (ONEDEcisionSoftware) philosophy, was used. More information on the designed simulation system, details of the implemented mathematical models, or definitions of boundary conditions may be found in book [10] or in the author's second chapter in the current book.

#### 3.1. Modelling the resistance heating

The process of numerically modelling a high-temperature experiment performed with a Gleeble 3800 thermo-mechanical simulator can be broken down into two main stages. In the first stage, the process of sample resistance heating and melting is performed according to a set programme. This stage is very important from the perspective of the specificity of the analysed process, where even small temperature changes may locally cause rapid changes in mechanical properties. The most accurate possible estimation of the temperature distribution within the sample volume significantly determines the quality of the obtained findings, including the determined strain-stress relationships, which ultimately strongly influence the force parameters of the deformation process itself (stage 2). In the first modelling approach, the commercial simulation system ANSYS was used, and the obtained final temperature field was then read in by the DEFFEM solver as the initial condition for the performed deformation process [11]. Resistance heating was modelled with an additional magnetohydrodynamical (MHD) module. This program extension concerns the interaction of the electromagnetic field and the electrically conducting medium. The module enables the behaviour of a conductor influenced by a constant or variable electromagnetic field to be analysed. In this study, the method in which the current density results from the solution of the electrical potential equation and Ohm's law was used. The electric field is described by the equation:

$$
\vec{E} = -\nabla \phi \tag{2}
$$


Figure 12. Change in the surface temperature of a sample heated with two types of grips (current intensity 2000A).

Figure 13. The distribution of voluminal heat sources in the sample Z axis for the selected resistance heating stages ("hot" grips).

where φ is the electric potential.

The current density is calculated from Ohm's law.

$$
\vec{j} = \sigma \,\,\vec{E}\tag{3}
$$

where σ is the specific conductance.

For a medium with a high conductivity, the principle of electric charge conservation is met additionally.

$$\nabla \cdot \vec{j} = 0 \tag{4}$$

The ANSYS program solver iteratively solves the equations of continuity, momentum, energy and electric potential in a loop. The energy equation contains an additional energy source, the Joule heat.

$$Q = \frac{1}{\sigma} \vec{j} \cdot \vec{j} \tag{5}$$

where Q is the Joule heat and j is the current density vector.
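Equations (2)–(5) chain together directly: the potential gradient gives the electric field, Ohm's law gives the current density, and the Joule term gives the voluminal heat source. A minimal one-dimensional finite-difference sketch of that chain is shown below; the grid size, sample length, conductivity and potential drop are illustrative values, not data from the chapter.

```python
# 1-D finite-difference sketch of Eqs. (2)-(5).
n = 11
length = 0.1                      # sample length, m (assumed)
dx = length / (n - 1)
sigma = 7.7e5                     # electrical conductivity, S/m (assumed)
# Linear potential drop along the sample, V (assumed boundary values).
phi = [0.5 * (1 - i / (n - 1)) for i in range(n)]

# Eq. (2): E = -d(phi)/dx, central differences in the interior nodes.
E = [-(phi[i + 1] - phi[i - 1]) / (2 * dx) for i in range(1, n - 1)]
# Eq. (3): Ohm's law, j = sigma * E.
j = [sigma * e for e in E]
# Eq. (5): Joule heat per unit volume, Q = j.j / sigma.
Q = [ji * ji / sigma for ji in j]
```

For a uniform potential drop the field, current density and source term are constant along the sample, which also satisfies the charge-conservation condition of Eq. (4).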

A number of simulations were performed, in which various heating schedules were used and their impact on the obtained final temperature field was analysed. In the model tests, both types of grips were used alternatively. It was found that the grips had a substantial impact on the attainable heating rate to the nominal deformation temperature. Figure 12 presents the temperature changes measured by a numerical sensor placed on the sample surface (at 1/2 of the heating zone length). Heating simulations for both variants of tools were performed with a constant current intensity of 2000A, assuming a heating time of 66s. The obtained difference of the maximum calculated temperatures for both tool variants was 123°C. Regardless of the adopted temperature schedule, grip type or the grade of the steel tested, the distribution of generated voluminal sources is parabolic (Figure 13).
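The parabolic distribution of voluminal heat sources along the sample Z axis can be sketched as a simple profile that peaks at mid-length and vanishes at the grips. The half-length and peak power below are illustrative assumptions, not values read from Figure 13.

```python
# Parabolic voluminal heat-source profile along the sample Z axis,
# peaking at mid-length and vanishing at the grip ends.
def parabolic_source(z, half_length=0.05, q_max=2.0e7):
    """Source power (W/m^3) at coordinate z in [-half_length, half_length].

    half_length and q_max are hypothetical illustrative values.
    """
    return q_max * (1.0 - (z / half_length) ** 2)
```

Such a profile reproduces the qualitative shape reported for all heating stages, regardless of grip type or steel grade.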



Regardless of the applied grips, the temperature gradient was observed both on the cross-section and on the longitudinal section of the sample (Figure 14). For a sample heated with a 2000A current for 66s using "hot" grips, the achieved sample surface temperature was 1338°C, while the core temperature was 1367°C. The temperature difference between the sample core and the surface (nominal temperature) was therefore 29°C. The current intensity characteristic is a relevant control parameter in the numerical model, as it determines the attainable nominal temperature of the test (possibility for remelting). For instance, a change in the current intensity from 2000 to 1000A caused a decrease in the amount of heat generated within the system, which translated into an attainable maximum temperature of 366°C for the sample core (Figure 15).

Figure 14. Temperature distribution on the sample section after 66s of heating ("hot" grips, current intensity 2000A).

In the second model approach, the ADINA commercial system was used, in which the temperature field solution was based upon the classic solution of Fourier's equation combined with inverse calculations [12, 13]. Assuming a constant current intensity, and on the basis of temperatures measured during the experiments, the values of voluminal heat sources were selected so as to obtain compatibility with the experimental findings. Alternatively, a constant value of voluminal heat sources was assumed (constant for a single simulation time interval, estimated previously with commercial simulation systems, e.g. Figure 15), and then the current intensity was determined. The adopted model assumptions allowed results with good compatibility with the experimental findings to be obtained, but also led to an ambiguity of the solution. Examples of results using the foregoing approaches can be found in papers [12–15]. Regardless of the adopted model approach, the calculations were very time-consuming and painstaking. This inspired the author to adopt the ONEDES (ONEDEcisionSoftware) term as a philosophy for designing dedicated original simulation systems [9, 10]. Intensive support from the Polish National Science Centre as part of research projects, and the large amount of time devoted by the author to the implementation of the developed solutions, allowed a globally unique tool dedicated to aiding high-temperature processes to be developed. The effect of the design and implementation work was the development of a resistance heating model in the Gleeble 3800 simulator system (ONEDES approach) and the experimental verification methodology. When modelling the Joule heat generation, it was assumed that its equivalent in the numerical model would be a voluminal heat source (function A) with its power proportional to the resistance R and the square of electric current I:
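The inverse procedure described above, selecting the voluminal heat-source power so that the computed temperatures match the measured ones, can be illustrated with a toy lumped-heating model and a bisection search. The lumped model (no heat losses), the material constants and both function names are simplifying assumptions for illustration; they do not represent the ADINA solution.

```python
# Toy inverse calibration: choose the voluminal source power q_vol so
# that a lumped (loss-free) heating model reaches a target temperature.
def final_temp(q_vol, t_heat=66.0, t0=20.0, rho=7800.0, cp=650.0):
    """Final temperature (°C) of a lumped volume heated by q_vol (W/m^3).

    rho (kg/m^3) and cp (J/(kg K)) are assumed steel-like values.
    """
    return t0 + q_vol * t_heat / (rho * cp)

def calibrate_source(t_target, lo=0.0, hi=1.0e9, iters=60):
    """Bisection on source power; final_temp is monotonic in q_vol."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if final_temp(mid) < t_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = calibrate_source(1485.0)   # source power matching the nominal test temperature
```

The real inverse problem is solved on a full temperature field rather than a single lumped value, which is what makes it ambiguous and computationally expensive.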

Figure 15. The temperature distribution on the sample section after 66s of heating ("cold" grips, current intensity 1000A).

$$Q = f\left(A(\tau)\, I^2(\tau)\, R(T)\right) \tag{6}$$

The relationship of the current intensity change as a function of heating time is directly recorded during physical tests with the Gleeble 3800 thermo-mechanical simulator. Figure 16 presents examples of results in the form of temperature change versus time, measured (thermocouples TC1, TC4, see Figure 1) and calculated (numerical sensors) during physical and computer simulations of sample heating to a temperature of 1485°C (steel S355).
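Equation (6) makes the source power proportional to the square of the recorded current I(τ) and the temperature-dependent resistance R(T). A direct transcription of that product is sketched below; the linear R(T) model and all its coefficients are hypothetical placeholders, since the chapter does not give the actual resistance characteristic.

```python
# Eq. (6) core term: Joule power I^2 * R(T), before the spatial
# distribution function A(tau) is applied.
def resistance(temp_c, r0=1.0e-4, alpha=6.0e-3):
    """Assumed linear resistance model R(T) = r0 * (1 + alpha*T), ohms."""
    return r0 * (1.0 + alpha * temp_c)

def source_power(current_a, temp_c):
    """Instantaneous Joule power (W) for a recorded current sample."""
    return current_a ** 2 * resistance(temp_c)

p = source_power(2000.0, 1400.0)   # power at 2000 A near the nominal temperature
```

In the full model this power is distributed over the sample volume by the function A, e.g. with the parabolic profile of Figure 13.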


The heating process was performed with two heating rates: to the temperature of 1450°C at a rate of 20°C/s, and next at a rate of 1°C/s to the nominal temperature of 1485°C. On the other hand, Figure 17 presents temperature changes versus time measured and calculated during physical and computer simulations of sample heating to the temperature of 1200°C (steel S355) at a constant rate of 5°C/s.

The obtained graphs feature a very good compatibility between the calculated and experimentally determined temperatures. The estimated relative error oscillated within 2–3%. Figures 18 and 19 present the temperature distribution on the longitudinal section of the sample and the symmetry with respect to the Z axis, after 3s of heating, and after heating to the test nominal temperature of 1485°C, respectively. The simulations were conducted with "hot" grips. Analysing Figure 18, one can observe an intensive temperature gradient near the place of tool-sample contact. The temperature gradient on the sample section at this simulation stage features a practically uniform temperature distribution of around 60°C, achieving its maximum value of 48°C after heating to a temperature of 1485°C.
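The reported 2–3% agreement between measured and calculated temperatures can be quantified with a simple pointwise relative-error measure. The sample values below are made up purely to illustrate the computation; they are not data from Figures 16 or 17.

```python
# Largest pointwise relative error between paired measured/calculated
# temperature samples (illustrative data, not the chapter's results).
def max_relative_error(measured, calculated):
    """Return max |m - c| / |m| over equal-length series."""
    return max(abs(m - c) / abs(m) for m, c in zip(measured, calculated))

measured = [400.0, 900.0, 1450.0, 1485.0]     # hypothetical thermocouple readings, °C
calculated = [392.0, 918.0, 1466.0, 1480.0]   # hypothetical simulation values, °C
err = max_relative_error(measured, calculated)
```

For these illustrative series the worst pointwise deviation is 2%, i.e. within the 2–3% band quoted above.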

The application of the simulation variant using "cold" grips leads to slightly different results (Figure 20). The attainable width of the remelting zone becomes shorter. In addition, one can observe that the obtained gradient on the sample section, 37°C, is smaller than for the variant with "hot" grips (Figure 19).

Figure 16. Temperature changes versus time obtained as a result of the experiment and computer simulation (thermocouples TC4 and TC1, "hot" grips).

Figure 17. Temperature changes versus time obtained as a result of the experiment and computer simulation (thermocouples TC4, TC2 and TC1, "cold" grips, heating rate 5°C/s).


Computer‐Aided Physical Simulation of the Soft‐Reduction and Rolling Process

http://dx.doi.org/10.5772/intechopen.68606

161

Figure 18. Temperature distribution on the sample section after 3 s of heating ("hot" grips).

The obtained results of the computer simulations of the resistance heating process feature considerable compatibility with the results obtained by physical simulations (for both tool variants). In the implemented numerical solution, the coupling of the electrical and temperature fields was not included directly. The heat generated within the sample volume as a result of the electrical current flow is modelled by an internal volumetric heat source. With this approach, the influence of the changing electrical properties of the solution domain (sample volume) on the electrical charge density and the local volumetric heat source power could not be analysed. It is difficult to state that taking the thermo-electrical coupling into account would constitute significant progress and would allow more precise results to be obtained. More details can be found in publication [10].
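The volumetric-heat-source treatment described above can be sketched with a minimal one-dimensional finite-difference model. This is an illustration only, not the solver used in the chapter; the material constants, rod length and source magnitude are rough assumptions for steel:

```python
# Minimal 1D transient heat conduction with an internal volumetric heat
# source standing in for Joule heating (illustrative sketch only).
rho, cp, k = 7800.0, 650.0, 30.0        # density, specific heat, conductivity (SI, assumed)
alpha = k / (rho * cp)                  # thermal diffusivity [m^2/s]
q = rho * cp * 20.0                     # source sized to give ~20 deg C/s adiabatic heating

n, dx, dt = 41, 1.0e-3, 0.005           # 40 mm rod, explicit FTCS scheme
assert alpha * dt / dx**2 < 0.5         # FTCS stability condition

T0 = 1200.0                             # initial temperature [deg C]
T = [T0] * n
for _ in range(200):                    # simulate 1 s of resistance heating
    Tn = T[:]
    for i in range(1, n - 1):
        conduction = alpha * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx**2
        Tn[i] = T[i] + dt * (conduction + q / (rho * cp))
    # "cold" grips mimicked by ends clamped at the grip temperature
    Tn[0], Tn[-1] = T0, T0
    T = Tn

print(T[n // 2] - T0)                   # core heats by ~20 deg C; ends stay cold
```

Here the "cold" grips are mimicked by Dirichlet boundary conditions; insulated ends would roughly correspond to the "hot"-grip variant.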


Figure 19. Temperature distribution on the sample section after heating to the nominal temperature of 1485°C ("hot" grips).

Figure 20. Temperature distribution on the sample section after heating to the nominal temperature of 1485°C ("cold" grips).

#### 3.2. The methodology of direct determination of mechanical properties (S355 grade steel)


The problem of determining the characteristics describing changes in stress as a function of strain, strain rate and temperature is very complex. As shown by the research presented herein, the results of two nominally identical tests may differ significantly in terms of the recorded temperature or force parameters (see Figures 3 and 4). In the context of determining mechanical properties, the accuracy of the determination of the temperature field therefore becomes particularly significant, because even small local temperature variations may cause rapid changes in mechanical properties. One of the methods applied by the author was the Numerical Identification Methodology (NIM), based upon the inverse solution and upon modelling with axially symmetrical models [10]. As part of the current project (3D model development), it has been found that this approach is ineffective in terms of computing cost, as well as of the quality of the obtained curves. Depending on the adopted treatment of the resistance heating model in the aspect of temperature field determination (inverse calculations, the temperature field from a commercial program, or the function description presented herein), slightly different temperature values are obtained within the deformation zone. Bearing in mind that mechanical properties change rapidly as a result of small temperature fluctuations, the obtained stress–strain curves differ considerably in their nature. In the context of the verification of the yield stress function model, the obtained results for the process force parameters, as well as the deformation zone shape itself, showed convergence with the experimental data. Thus a question arises: which approach to modelling mechanical properties at such high temperatures is the optimum solution?
This was one of the factors that inspired the author to undertake implementation and development work on formulating a full multi-scale 3D model of high-temperature effects. Therefore, as part of the project work, the Direct Identification Methodology (DIM) was developed to determine the mentioned dependences directly. The DIM utilises the capabilities of the Gleeble 3800 simulator for the experimental research and the original DEFFEM simulation system for the identification of the model parameters [10]. The proposed research methodology consists of the following stages. In the first stage, samples are prepared for tests, the measurement thermocouples are installed, and the copper grips and the experiment programme are selected [10]. The second stage comprises physical tensile tests based on the assumed physical simulation schedule. The experiment programme included heating to a temperature of 1400°C at a rate of 20°C/s, and next to a temperature of 1480°C at a rate of 1°C/s. Finally, cooling to the nominal deformation temperature was performed at a rate of 10°C/s and, after holding for 10 s at the set temperature, the deformation process (tension test) was performed with a stroke of 0.5–2 mm and tool stroke rates of 1 and 20 mm/s. In the third stage, preliminary simulations are performed in order to estimate the length of the deformable zone. On the basis of the preliminary test results, as well as the analysis of the temperature fields, the length L<sub>0</sub> of the effective working zone was estimated at 20 mm. Within this zone, the strain rate and the stress are assumed to be uniaxial. The nominal strain ε<sub>nom</sub> and the nominal strain rate ε̇<sub>nom</sub> are defined as follows:

$$
\varepsilon\_{\text{nom}} = \frac{\Delta L}{L\_0} \tag{7}
$$


where ΔL is the grip stroke (the elongation of the effective working zone at time τ) and L<sub>0</sub> is the length of the effective working zone.

$$
\dot{\varepsilon}\_{\text{nom}} = \frac{\text{stroke\\_rate}}{L\_0} \tag{8}
$$

The nominal stress is calculated with the following relationship:

$$
\sigma\_{\text{nom}}^{\text{exp}} = \frac{F}{\mathcal{S}\_0} \tag{9}
$$

where F is the tensile force measured by the Gleeble 3800 simulator and S<sub>0</sub> is the original cross-sectional area of the sample. In the fourth stage, the yield stress function form is selected. In the presented solution, the function describing the dependence of the nominal stress on the nominal strain has the following form [10]:

$$
\sigma\_{nom}^{exp} = \frac{\varepsilon\_{nom}^{n}}{\alpha} \text{ASINH} \left[ \left( \frac{\dot{\varepsilon}\_{nom}}{A} \right)^{m} \exp \left( \frac{mQ}{RT\_{nom}} \right) \right] \tag{10}
$$

The last parameter that should be defined in function (10) is the value of the nominal temperature T<sub>nom</sub>. In this study, the nominal temperature was defined as the sample surface temperature:

$$T\_{nom} = T\_{surf}^{exp} \tag{11}$$
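As a sketch, Eqs. (7), (8) and (10) can be evaluated numerically with the parameter values reported in Table 1. The deformation conditions chosen below, and the assumption that T<sub>nom</sub> enters Eq. (10) in kelvin, are illustrative choices, not statements from the chapter:

```python
import math

# Parameters of function (10) identified by the DIM (Table 1)
ALPHA, N, A, M, Q = 5.4623641e-02, 0.2089816, 1.65e17, 0.1855337, 511111.2
R = 8.314                                # universal gas constant [J/(mol K)]

def nominal_strain(dL, L0=0.020):        # Eq. (7): grip stroke / working-zone length
    return dL / L0

def nominal_strain_rate(stroke_rate, L0=0.020):  # Eq. (8)
    return stroke_rate / L0

def nominal_stress(eps, eps_dot, T_nom_C):
    """Eq. (10): sigma = eps^n / alpha * asinh[(eps_dot/A)^m * exp(m*Q/(R*T))]."""
    T = T_nom_C + 273.15                 # assumed: temperature in kelvin
    Z = (eps_dot / A) ** M * math.exp(M * Q / (R * T))
    return eps ** N / ALPHA * math.asinh(Z)

# Illustrative point: 1 mm elongation of the 20 mm zone at 1 mm/s, 1450 deg C
eps = nominal_strain(0.001)
eps_dot = nominal_strain_rate(0.001)
print(nominal_stress(eps, eps_dot, 1450.0))  # stress in MPa, since alpha is in MPa^-1
```

Since α is given in MPa<sup>−1</sup> and A in s<sup>−1</sup>, the result comes out in MPa; the stress increases with strain rate and decreases with temperature, as expected for hot deformation.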

In the last stage of the proposed methodology, the objective function for the identification of the searched parameter vector x = (α, n, A, m, Q) of function (10) is defined [16]:

$$\varphi(\mathbf{x}) = \frac{1}{N\_t} \frac{1}{N\_{pr}} \sum\_{i=1}^{N\_t} \sum\_{j=1}^{N\_{pr}} \left[ \frac{\sigma\_{nom,ij}^{calc}(\mathbf{x}) - \sigma\_{nom,ij}^{exp}}{\sigma\_{nom,ij}^{exp}} \right]^2 \tag{12}$$

where N<sub>t</sub> and N<sub>pr</sub> are the numbers of tensile tests and measurement points, respectively, and σ<sub>nom</sub><sup>calc</sup> and σ<sub>nom</sub><sup>exp</sup> are the nominal stresses obtained from the calculations and the experiments, respectively.

The searched parameter vector x can be identified by minimisation of the objective function (12); gradient-free optimisation was used for this purpose. The parameters of vector x identified with the DIM are presented in Table 1.
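A minimal sketch of the objective function (12) together with a simple gradient-free (random-search) minimiser follows; the chapter's identification was performed with the DEFFEM system, so this toy code only illustrates the principle:

```python
import random

def phi(x, sigma_exp, model):
    """Eq. (12): mean squared relative deviation over tests i and points j."""
    calc = model(x)                      # predicted nominal stresses, same shape as sigma_exp
    Nt, Npr = len(sigma_exp), len(sigma_exp[0])
    total = sum(((calc[i][j] - sigma_exp[i][j]) / sigma_exp[i][j]) ** 2
                for i in range(Nt) for j in range(Npr))
    return total / (Nt * Npr)

def random_search(f, x0, step=0.1, iters=500, seed=0):
    """Gradient-free minimiser: accept random perturbations that lower f."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi * (1.0 + step * rng.uniform(-1, 1)) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

# Toy check: fit a one-parameter linear model y = a*t to data generated with a = 2
model = lambda p: [[p[0] * t for t in (1.0, 2.0, 3.0)]]
data = [[2.0, 4.0, 6.0]]
x_best, f_best = random_search(lambda p: phi(p, data, model), [1.0])
```

The toy check fits a one-parameter linear model; for the real problem, x would be the five-component vector (α, n, A, m, Q) and `model` would run the tensile-test simulation.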

The calculated and measured nominal stress–strain curves are shown in Figures 21–26. All measurement points obtained from the experiment are included in the graphs in order to present the scatter of the experimental data; before being used in the optimisation procedures, these data were subjected to a smoothing procedure. A reasonable agreement can be observed between the calculated and the directly determined nominal stress–strain curves. The mean relative errors were within the range 0.9–8.6%.

#### 3.3. Computer simulations: examples


The tension simulations were conducted while attempting to reflect the conditions of the conducted experiments as accurately as possible (see stage 2 of the Direct Identification Methodology). The first simulation variant covered deformation in the solidification phase for the nominal test temperature of 1450°C, tool stroke rates of 1 and 20 mm/s, and an elongation of 2 mm. The basic aim of the simulation was to evaluate the suitability of the developed function describing changes in stress versus strain for the defined temperature range. Figure 27 presents the initial temperature field for the nominal deformation test performed at a temperature of 1450°C using "hot" grips. The temperature difference between the sample surface and core was 33°C.

| α [MPa<sup>−1</sup>] | n | A [s<sup>−1</sup>] | m | Q [J/mol] |
|---|---|---|---|---|
| 5.4623641E−02 | 0.2089816 | 1.65E+17 | 0.1855337 | 511111.2 |

Table 1. Identified parameters by the Direct Identification Methodology (DIM).

Figure 21. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1300°C and the nominal strain rate 0.05 s<sup>−1</sup>.

Figure 22. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1300°C and the nominal strain rate 1 s<sup>−1</sup>.

Figure 23. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1350°C and the nominal strain rate 0.05 s<sup>−1</sup>.

Figure 24. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1350°C and the nominal strain rate 1 s<sup>−1</sup>.

Figure 25. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1450°C and the nominal strain rate 0.05 s<sup>−1</sup>.

Figure 26. Comparison between measured (points) and calculated (line) stress–strain curve at the nominal temperature of 1450°C and the nominal strain rate 1 s<sup>−1</sup>.

Figure 27. Temperature distribution on the sample section after heating and cooling to the nominal temperature of 1450°C ("hot" grips).

Figure 28 presents the ultimate temperature distribution and the strain intensity after the tensioning process at a tool stroke rate of 1 mm/s and a grip stroke of 2 mm. Visualisations were made along the sample, maintaining the symmetry with respect to the Z axis. When analysing the temperature field after the resistance heating process (Figure 27), which is at the same time the initial condition for the mechanical solution, and the temperature field after the tensioning process (Figure 28), one may observe a slight reduction of the maximum temperature from 1483 to 1482°C. For the analysed temperature range, even such small temperature changes within the deformation zone can cause rapid changes in the plastic and mechanical properties. Analysing the obtained results, one may observe a concentration of the maximum strain intensity values in the middle part of the sample.

Figure 28. Distribution of (a) temperature and (b) strain intensity on the cross-section of a sample deformed at the nominal temperature of 1450°C ("hot" grips, stroke rate 1 mm/s, stroke 2 mm).

In Figures 29 and 30, the comparison between the measured and calculated loads at a nominal temperature of 1450°C and two stroke rates of 1 and 20 mm/s is presented. The presented load curves feature a fairly large discrepancy between the measured and calculated values. The obtained simulation results indicate that the application of the developed Direct Identification Methodology to determine the parameters of the function describing these changes leads to the final results with an average error at a level of 8.1% for the deformation variant at a temperature of 1450°C (stroke rate 1 mm/s) and 19.1% for the test at a temperature of 1450°C (stroke rate 20 mm/s). However, the obtained results indicate a certain non-uniformity of the deformation zone itself, as well as rapid changes in the plastic and mechanical properties along with a temperature change.

Figure 29. The comparison between the measured and calculated loads at a nominal temperature of 1450°C and a stroke rate of 1 mm/s.

Figure 30. The comparison between the measured and calculated loads at a nominal temperature of 1450°C and a stroke rate of 20 mm/s.

|  | Solidification stage [N] | Rolling stage [N] |
|---|---|---|
| Physical simulation | 617 | 7144 |
| Computer simulation | 702 | 7737 |
| Relative error | 13.77% | 8.30% |

Table 2. Comparison of the maximum forces determined experimentally and numerically at the individual stages.

In the second variant, a pilot simulation of the integrated strip casting and rolling process was performed, in which the deformation proceeded in two primary phases: crystallisation followed by the rough-rolling simulation. Conducting a multi-stage simulation required intensive implementation work and a partial reorganisation of the numerical codes in order to ensure the transfer of the strain and stress states and of the temperature field to the next-stage simulation. The pilot physical and computer simulations included heating to a temperature of 1450°C at a rate of 20°C/s, and next to a temperature of 1485°C at a rate of 1°C/s, in order to remelt the sample. The deformation process (compression) in the crystallisation phase was performed at a stroke rate of 0.25 mm/s and a stroke of 1.5 mm at a nominal temperature of 1460°C. Next, the sample was cooled at an average cooling rate of 50°C/s to the nominal rolling temperature of 1000°C. In the rolling process, the sample was deformed (compressed) at a stroke rate of 1.25 mm/s and a stroke of 4.0 mm. The obtained results of the pilot numerical simulations were verified by comparison of the calculated and experimentally determined maximum force values at the individual stages, which are presented in Table 2.
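The stage durations implied by the quoted rates and set-points can be checked with a few lines; the ambient starting temperature of 20°C is an assumption, as the chapter states only rates and target temperatures:

```python
# Durations of the pilot heating/cooling schedule described above, from the
# stated ramp rates and set-points (ambient start of 20 deg C is an assumption).
def stage_time(t_from, t_to, rate):
    """Time [s] to ramp between two temperatures at a constant rate [deg C/s]."""
    return abs(t_to - t_from) / rate

heat_1 = stage_time(20.0, 1450.0, 20.0)    # fast heating stage
heat_2 = stage_time(1450.0, 1485.0, 1.0)   # slow heating to remelt
cool = stage_time(1485.0, 1000.0, 50.0)    # cooling to the rolling temperature

print(heat_1, heat_2, cool)  # 71.5 35.0 9.7
```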

At both stages, the maximum force values calculated numerically were higher than those determined experimentally. The calculated relative error reached its maximum value of 13.77% for the deformation stage in the crystallisation phase. Bear in mind that the calculations within the DIM were performed for an adopted nominal test temperature equal to the surface temperature, while the sample core temperature was higher by 33°C. Therefore, further research is necessary to determine the nominal temperature, e.g. by adopting the core temperature as the nominal temperature, which should allow the differences in the process force parameters to be reduced. The other essential factor influencing the obtained discrepancies between the results of the physical and computer simulations is the lack of temperature field symmetry within the sample volume. Given the nature of the axially symmetrical numerical model and the required computing accuracy, the temperature field in each section plane should, in ideal conditions, be the same or very similar. Figure 31 presents a picture from a NANOTOM N190 tomograph showing the formed porous zone. The visible porous zone formed starting from the sample core and propagated towards the sample surface (the place of installation of the control thermocouple TC4, see Figure 1). This is the place where the sample, surrounded by a quartz shield, has a 2–3 mm gap enabling the thermocouples to be installed. The other areas are thermally insulated, and the said gap is the main source of disturbances in the heat exchange between the sample and its environment (the simulator interior). Therefore, one may conclude that eliminating the quartz shield, combined with precise (manual) control of the process, will allow the lack of temperature field symmetry to be eliminated. As a result, the obtained numerical simulation results should feature higher accuracy.
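The relative errors in Table 2 can be reproduced directly from the quoted forces (note that 13.77% appears to be the truncated, rather than rounded, value of 13.78%):

```python
# Reproducing the relative errors of Table 2 from the measured and
# calculated maximum forces quoted in the chapter (experiment as reference).
def relative_error(calc, meas):
    return (calc - meas) / meas * 100.0     # percent

solidification = relative_error(702.0, 617.0)    # ~13.77 %
rolling = relative_error(7737.0, 7144.0)         # ~8.30 %
print(solidification, rolling)
```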



The research has been supported by the Polish National Science Centre (2012–2017), Decision

[1] Project report. (AGH Krakow-IMZ Gliwice); Number: B0–1124, 2010 (not published)

[2] Bald W, et al. Innovative technologies for strip production. Steel Times International.

[3] Cook R, Grocock PG, Thomas PM et al. Development of the twin-roll casting process.

[4] Fan P, Zhou S, Liang X et al. Thin strip casting of high speed steels. Journal of Materials

[5] Park CM, Kim WS, Park GJ. Thermal analysis of the roll in the strip casting process.

[6] Seo PK, Park KJ, Kang CG. Semi-solid die casting process with three steps die system.

[7] Watari H, Davey K, Rasgado MT et al. Semi-solid manufacturing process of magnesium alloys by twin-roll casting. Journal of Materials Processing Technology. 2004;156:1662–

[8] Głowacki M, Hojny M, Kuziak R. Komputerowo wspomagane badania właściwości

[9] Hojny M. Projektowanie dedykowanych systemów symulacji odkształcania stali w stanie

modelled in the flat strain condition.

number: DEC-2011/03/D/ST8/04041

Address all correspondence to: mhojny@metal.agh.edu.pl

AGH University of Science and Technology, Kraków, Poland

Journal of Materials Processing Technology. 1995;55:76–84

Mechanics Research Communications. 2003;30:297–310

Journal of Materials Processing Technology. 2004;154:442–449

mechanicznych stali w stanie półciekłym. Kraków: Wyd. AGH; 2012

Processing Technology. 1997;63:792–796

półciekłym. Krakow, Poland: Wzorek; 2014

Acknowledgements

Author details

Marcin Hojny

References

1667

2000;24:16–19

Figure 31. A longitudinal section with a visible formed porous zone ("hot" grips, cylindrical sample heating zone centre).

## 4. Conclusions

The primary aim of this chapter is to present the experimental and modelling problems related to research aimed at obtaining the data necessary to develop a physical model of steel deformation in the semi-solid state. Computer aid to the experiment, using computer simulation of the process, is an inherent part of the presented methodology; it is difficult to imagine experimental research on steel deformed during the final solidification phase without such simulation. This issue is strictly related to the signalled problems concerning the application of the soft-reduction process. The formulated resistance heating model in the simulator system, together with the computer-aided methodology of direct determination of the mechanical properties of the tested steel, allowed a preliminary concept of multi-stage modelling (of the integrated casting and rolling process) to be developed with the DEFFEM package. The obtained force parameters show good agreement, although the constraints resulting from the application of axially symmetrical models indicate new directions in the development of models and methods. In order to fully describe the behaviour of semi-solid steel during its deformation in the integrated casting and rolling process, the constructed mathematical model must be fully three-dimensional. The necessity of applying spatial models arises from the existence of high-temperature zones (the solid and semi-solid zones), which often have a complex geometrical shape. Such models should be applied to the issue concerned regardless of the fact that traditional strip rolling processes in hot metal forming conditions can be modelled in the flat strain condition.

## Acknowledgements

The research has been supported by the Polish National Science Centre (2012–2017), Decision number: DEC-2011/03/D/ST8/04041

## Author details

Marcin Hojny

Address all correspondence to: mhojny@metal.agh.edu.pl

AGH University of Science and Technology, Kraków, Poland


**Chapter 8**

## **Surrogate Modelling with Sequential Design for Expensive Simulation Applications**

Joachim van der Herten, Tom Van Steenkiste, Ivo Couckuyt and Tom Dhaene

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67739

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **Abstract**


The computational demands of virtual experiments for modern product development processes can get out of control due to the fine resolution and detail incorporated in simulation packages. This demands appropriate approximation strategies and a reliable selection of evaluations to keep the number of required evaluations limited, without compromising on the quality and requirements specified upfront. Surrogate models provide an appealing data-driven strategy to accomplish these goals for applications including design space exploration, optimization, visualization and sensitivity analysis. Extended with sequential design, satisfactory solutions can be identified quickly, greatly motivating the adoption of this technology into the design process.

**Keywords:** surrogate modelling, sequential design, optimization, sensitivity analysis, active learning

## **1. Introduction**

Amongst the countless research domains and fields revolutionized by the merits of computer simulation, engineering is a prime example, as it continues to benefit greatly from the introduction of computer simulations. During product design, engineers encounter several complex input/output systems, which need to be designed and optimized. Traditionally, several prototypes were required to assure that quality criteria were met, to obtain optimal solutions for design choices, and to evaluate the behaviour of products and components under varying conditions. Typically, a single prototype is not sufficient, and lessons learnt are used to improve the design, back at the drawing table. The development process therefore involves building several prototypes in order to gain more confidence in the solutions. A direct consequence of this approach is that the development process is both slow and not cost effective.


The introduction of computer simulations caused a revolution: by bundling implementations of material, mechanical and physical properties into a software package, simulating the desired aspects of a system and performing the tests and experiments virtually, the number of required prototypes can be drastically reduced to only a few at the very end of the design process. These prototypes can be regarded purely as a validation of the simulations. The simulation itself can be interpreted as a *model* and serves as an abstract layer between the engineer and the real world. Performing a virtual experiment is faster and very inexpensive. A direct consequence was an acceleration of development and system design, contributing to a shorter time-to-market and a more effective process in general. In addition, it also became possible to perform more virtual experiments, providing a way to achieve better products and design optimality.

However, as simulation software became more precise and gained accuracy over the years, its computational cost grew tremendously. In fact, the growth in computational cost was so fast that it outpaced the growth in computational power, resulting in very lengthy simulations on state-of-the-art machines and high-performance computing environments, mainly due to the never-ending drive for finer time scales, more detail and greater algorithmic complexity. For instance, a computational fluid dynamics (CFD) simulation of a cooling system can take several days to complete [1], and a simulation of a single crash test was reported to take up to 36 hours [2]. This introduces a new problem: large-scale parameter sweeping and direct use of this type of *computationally expensive* simulation for evaluation-intensive tasks such as optimization and sensitivity analysis are impractical and should be avoided.

To counter this enormous growth in computational cost, an additional layer of abstraction between the complex system in the real world and the engineer was proposed, more specifically between the simulation and the engineer. Rather than interacting directly with the simulator, a cheaper approximation is constructed. Roughly three approaches to obtain this approximation exist:


• **Model driven** (known as model order reduction) takes a top-down approach by applying mathematical techniques to derive approximations directly from the original simulator. This, however, exploits information about the application domain and is therefore problem specific.

• **Data driven** assumes absolutely nothing is known about the inner workings of the simulator. It is assumed to be a *black-box*, and information about the response is collected from evaluations. From these data, an approximation is derived. Because this approach is very general, it is not bound to a specific domain.

• **Hybrid** is the overlap zone between model driven and data driven. It attempts to incorporate domain-specific knowledge into a data-driven process to obtain better traceability and reach accuracy with fewer evaluations.

In this chapter, the focus is on the data-driven approach: the response of a simulator as a function of its inputs is mimicked by *surrogate models* (metamodels, emulators or response surface models are often encountered as synonyms). These cheap-to-evaluate mathematical expressions can be evaluated efficiently and can replace the simulator. A more extensive overview of usage scenarios and a formal description of surrogate modelling are given in Section 2. Before the surrogate model can be used, it must first be constructed and trained, during the surrogate modelling process, on a number of well-chosen evaluations (*samples*) to be evaluated by the simulator in order to satisfy the requirements specified upfront. The problem of selecting an appropriate set of samples is further explored in Section 3. Section 4 briefly introduces an integrated platform for surrogate modelling with sequential design. Finally, these techniques are demonstrated on three use-cases in Section 5.

## **2. Surrogate modelling**

The introduction of this chapter highlighted the global idea of approximation and the benefits of introducing an additional layer of abstraction when simulations are expensive. Now, the data-driven approach (surrogate modelling) is discussed in more depth, both from a usability point of view and through a more formal description of the technique.

### **2.1. Goals and usage scenarios**

The most direct implementation of surrogate modelling is training a globally accurate model over the entire design space. This approximation can then replace the expensive simulation evaluations for a variety of engineering tasks such as design space exploration, parameterization of simulations or visualization.

A different use-case of surrogate models is sensitivity analysis of the complex system. Especially when many input parameters are present, it is very difficult to achieve global accuracy due to the exponential growth of the input space (known as the *curse of dimensionality*). Fortunately, not all input parameters contribute equally to the output variability; in fact, some might not have any impact at all [3]. The surrogate models can be used directly for evaluation-based sensitivity analysis methods such as Sobol indices [4], interaction indices [5] or gradient-based methods. For some kernel-based modelling methods, analytical computation of sensitivity measures is possible, resulting in faster and more reliable estimation schemes, even before global accuracy is achieved [6].
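As a toy illustration of the evaluation-based route, the sketch below estimates first-order Sobol indices with a Monte Carlo pick-freeze scheme on a cheap stand-in model. The function `f`, its linear form and the sample size are illustrative assumptions, not taken from the chapter; the point is that such estimators need many evaluations, which only become affordable once `f` is a surrogate rather than the simulator itself.

```python
import random

def f(x1, x2):
    # Stand-in for a trained surrogate: a linear response whose exact
    # first-order Sobol indices are S1 = 1/5 and S2 = 4/5 for
    # independent U(0,1) inputs.
    return x1 + 2.0 * x2

def first_order_sobol(f, n, seed=0):
    rng = random.Random(seed)
    A = [(rng.random(), rng.random()) for _ in range(n)]
    B = [(rng.random(), rng.random()) for _ in range(n)]
    fA = [f(*p) for p in A]
    fB = [f(*p) for p in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    indices = []
    for i in range(2):
        # Pick-freeze: evaluate at point A_j with coordinate i taken from B_j.
        fABi = []
        for a, b in zip(A, B):
            x = list(a)
            x[i] = b[i]
            fABi.append(f(*x))
        # Saltelli-type Monte Carlo estimator of the partial variance V_i.
        v_i = sum(fb * (fab - fa)
                  for fa, fb, fab in zip(fA, fB, fABi)) / n
        indices.append(v_i / var)
    return indices

s1, s2 = first_order_sobol(f, 20000)
```

With the linear stand-in, the estimates converge to 0.2 and 0.8, matching the analytical indices.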

Another task at which surrogates excel is the optimization of expensive objective functions. This discipline is often referred to as surrogate-based optimization (SBO). A globally accurate surrogate model can be built and optimized using traditional optimization methods such as gradient descent, or metaheuristics such as particle swarm optimization [7]. Although this approach is correct and works faster than simulating each call of the objective function, it is not necessarily the most efficient methodology: when seeking a minimum, fewer samples can be devoted to regions that are clearly shown not to contain it. This results in specific methodologies, which explore the search space for optima and exploit the available knowledge to refine optima. This is a difficult trade-off and will be discussed in detail in Section 3.2.
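The fit-optimize-evaluate loop at the heart of SBO can be illustrated with a deliberately minimal sketch: a quadratic surrogate is repeatedly fitted through the three best samples of a hypothetical expensive objective, and the surrogate's minimum is chosen as the next evaluation. Everything named below (the objective, the starting design, the stopping rule) is an illustrative assumption; practical SBO methods use richer surrogates such as Kriging together with infill criteria like expected improvement.

```python
def expensive_objective(x):
    # Hypothetical stand-in for a costly simulator call; true minimum at x = 2.
    return (x - 2.0) ** 2 + 1.0

def surrogate_minimum(x1, x2, x3, f1, f2, f3):
    # Minimum of the quadratic surrogate interpolating three samples
    # (the classical parabolic-interpolation vertex formula).
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Small one-shot design to start from.
xs = [0.0, 1.0, 4.0]
ys = [expensive_objective(x) for x in xs]

for _ in range(10):  # sequential design loop
    # Fit the surrogate to the three best samples, then evaluate its minimum.
    (f1, x1), (f2, x2), (f3, x3) = sorted(zip(ys, xs))[:3]
    x_new = surrogate_minimum(x1, x2, x3, f1, f2, f3)
    if any(abs(x_new - x) < 1e-9 for x in xs):
        break  # the surrogate minimum has already been evaluated
    xs.append(x_new)
    ys.append(expensive_objective(x_new))

x_best = xs[ys.index(min(ys))]  # -> 2.0, found with only 4 expensive calls
```

The loop finds the optimum with a handful of objective evaluations, which is exactly the budget argument made above.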


All tasks described so far were forward tasks, mapping samples from a design space to the output or objective space. It is also possible to do the opposite: this is referred to as inverse surrogate modelling. It can be interpreted as identifying the areas of the design space corresponding to a certain desired or feasible output range. Typical approaches involve training a (forward) surrogate model first, then optimizing the model using an error function between the output and the desired output as objective function. This optimization is often preferred to be a robust optimization, to account for the error of the forward surrogate model [8]. Specific sampling schemes to identify these regions directly have also been proposed [9]. Finally, it is also possible to translate the inverse problem into a forward problem by discretizing the output (feasible/infeasible point) and learning the class boundaries.

Because the concept of surrogate modelling is both flexible and generic, allowing several modifications tailored to the task at hand, it has been applied in a wide range of fields including metallurgy, economics, operations research, robotics, electronics, physics, automotive, biology, geology, etc.

### **2.2. Formalism**

Formally, the surrogate modelling process can be described as follows. Given an expensive function *f* and a collection of data samples with corresponding evaluations represented by *D*, we seek to find an approximation function *f̃*:

$$\arg\min_{t \in T}\; \arg\min_{\theta \in \Theta}\; \Lambda\big(\kappa, \tilde{f}_{t,\theta}, D\big) \quad \text{subject to} \quad \Lambda\big(\kappa, \tilde{f}_{t,\theta}, D\big) \le \tau. \tag{1}$$

It is clear that the selection of the approximation function is a complex interaction of several aspects, more specifically: *κ* represents an error function such as the popular root-mean-square error (RMSE), *τ* the target value for the quality as expressed by the error function, all under the operation of the model quality estimator *Λ*. The quality estimator drives the optimization of both the model type *t*, out of the set of available model types *T*, and its hyperparameters *θ*. Typical examples of surrogate model types are Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Radial Basis Functions (RBFs), rational and polynomial models, Kriging and Gaussian Process (GP) models. Examples of hyperparameter optimization include tuning kernel parameters or regularization constants, or identifying the optimal order of a polynomial or the most appropriate architecture for a neural network.

The choice of *Λ* is therefore crucial to obtain a satisfying surrogate model at the end of the process, as it is the metric driving the search for *θ*. This aspect is often overlooked at first, requiring several iterations of the process to obtain satisfactory results. Consulting the users of the surrogate model and defining what is expected from the model, and what is not, is a good starting point. These requirements can then be formally translated into a good quality estimator. Unfortunately, defining *Λ* does not end with casting user requirements. Often, hyperparameters have a direct impact on the model complexity or the penalization thereof, and tuning them is therefore tricky due to the bias-variance trade-off [10].

For instance, a straightforward approach is minimizing the error between the surrogate model response and the true responses for the samples used to train the surrogate. This is often referred to as the *training error* or *sample error*, and it pushes the hyperparameter optimization to favour complex models interpolating the data points perfectly. Although this solution might be considered satisfactory at first sight, in reality it rarely provides a good model, as the optimization problem does not consider model quality at other, unobserved samples in the design space. This approach inevitably leads to very unreliable responses when these unobserved samples are to be predicted; the model is then said to be *overfitting* or to have poor *generalization performance*. Popular quality estimators accounting for generalization performance include crossvalidation and validation sets.

## **3. Experimental design**

A typical requirement for the surrogate model is to be *accurate*; hence, the objective of the modelling process is the ability to obtain accuracy with only a small number of (expensive) simulator evaluations. A different kind of requirement is optimizing the response of the simulator. This requires fast discovery of promising regions and fast exploitation thereof to identify the (global) optimum. Hence, a different strategy for selecting the samples is required, as the accuracy of the surrogate in non-optimal regions is of lesser importance. A refined set of model requirements and the goal of the process are required, as they greatly affect the choice of samples to be evaluated. The choice of samples is referred to as the experimental design.

### **3.1. One-shot design**

The traditional approaches to generate an experimental design are the *one-shot* designs. Prior to any evaluation, all samples are selected in a space-filling manner: at this point, no further information is available due to the black-box assumption on the simulator itself (as part of the data-driven approach). Therefore, the information density should be approximately equal over the entire design space, and the samples are to be distributed uniformly. To this end, several approaches related to *Design of Experiments* (DoE) have been developed. However, only the space-filling aspect has an impact within the context of computer experiments, as other criteria such as blocking and replication lose their relevance [11]. This led to the transition and extension of these existing statistical methods to computer experiments [11, 12]. Widely applied are the factorial designs (grid-based) [13] and optimal (maximin) Latin Hypercube designs (LHDs) [14]. Both are illustrated in **Figures 1** and **2**, respectively, for a two-dimensional input space and 16 samples. An LHD avoids collapsing points should the input space be projected into a lower-dimensional space. Other approaches include maximin and minimax designs, Box and Behnken [15], central composite designs [16] and (quasi-)Monte Carlo methods [17–19].

Despite their widespread usage, these standard approaches to generate experimental designs come with a number of disadvantages. First and foremost, the most qualitative designs (with the best space-filling properties) can be extremely complex to generate (especially for problems with a high-dimensional input space) due to their geometric properties. For instance, gener‐

methodologies, which explore the search space for optima and exploit the available knowledge to refine optima. This is a difficult trade‐off and will be discussed in detail in Section 3.2.

All tasks described so far were forward tasks, mapping samples from a design space to the output or objective space. It is also possible to do the opposite: this is referred to as inverse surrogate modelling. This can be interpreted as identifying the areas of the design space cor‐ responding to a certain desired or feasible output range. Typical approaches involve training a (forward) surrogate model first, then optimizing the model using an error function between the output and the desired output as objective function. This optimization is often preferred to be a robust optimization to account for the error of the forward surrogate model [8]. Specific sampling schemes to identify these regions directly were also proposed [9]. Finally, it is also possible to translate the inverse problem into a forward problem involving discretizing the

Because the concept of surrogate modelling is both flexible as well as generic, allowing several modifications tailored for the task at hand, it has been applied in wide range of fields includ‐ ing metallurgy, economics, operations research, robotics, electronics, physics, automotive,

Formally, the surrogate modelling process can be described as follows. Given an expensive function *f* and a collection of data samples with corresponding evaluations represented by *D*,

subject to <sup>Λ</sup>(*κ*, *<sup>f</sup>*

˜

It is clear that the selection of the approximation function is a complex interaction of sev‐ eral aspects, more specifically: *κ* represents an error function such as the popular root‐mean‐ square error (RMSE), *τ* the target value for the quality as expressed by the error function, all under the operation of the model quality estimator Λ. The quality estimator drives the optimi‐ zation of both the model type *t* out of the set of available model types *T* and its hyperparam‐ eters *θ*. Typical examples of surrogate model types are Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Radial Basis Functions (RBFs), rational and polynomial models, Kriging and Gaussian Process (GP) models. Examples of hyperparameter optimiza‐ tion include tuning kernel parameters or regularization constants or identifying the optimal

The choice of *Λ* is therefore crucial to obtain a satisfying surrogate model at the end of the process as it is the metric driving the search for *θ*. This aspect is often overlooked at first, requiring several iterations of the process to obtain satisfactory results. Consulting the users of the surrogate model and defining what is expected from the model and what is not, is a good starting point. These requirements can then be formally translated into a good qual‐ ity estimator. Unfortunately, defining *Λ* does not end with casting user requirements. Often, hyperparameters have a direct impact on the model complexity or penalization thereof, thus

order of a polynomial or the most appropriate architecture for a neural network.

˜ *<sup>t</sup>*,*<sup>ϑ</sup>*, *D*)

*<sup>t</sup>*,*<sup>ϑ</sup>*, *<sup>D</sup>*) <sup>≤</sup> *<sup>τ</sup>*. (1)

˜:

output (feasible/infeasible point) and learning the class boundaries.

biology, geology, etc.

we seek to find an approximation function *f*

arg max*<sup>t</sup>*∈*<sup>T</sup>* arg min*<sup>θ</sup>*∈<sup>Θ</sup> <sup>−</sup> <sup>Λ</sup>(*κ*, *<sup>f</sup>*

tuning them is tricky due to the bias‐variance trade‐off [10].

**2.2. Formalism**

176 Computer Simulation

A typical requirement for the surrogate model is to be *accurate*; hence, the objective of the modelling process is to obtain accuracy with only a small number of (expensive) simulator evaluations. A different kind of requirement is optimizing the response of the simulator. This requires fast discovery of promising regions and fast exploitation thereof to identify the (global) optimum. Hence, a different strategy for selecting the samples is required, as the accuracy of the surrogate in non-optimal regions is of lesser importance. A refined set of model requirements and the goal of the process are required, as they greatly affect the choice of samples to be evaluated. The choice of samples is referred to as the experimental design.

### **3.1. One‐shot design**

The traditional approaches to generate an experimental design are the *one-shot* designs. Prior to any evaluation, all samples are selected in a space-filling manner: at this point, no further information is available due to the black-box assumption on the simulator itself (as part of the data-driven approach). Therefore, the information density should be approximately equal over the entire design space and the samples are to be distributed uniformly. To this end, several approaches related to *Design of Experiments* (DoE) have been developed. However, only the space-filling aspect has an impact within the context of computer experiments, as other criteria such as blocking and replication lose their relevance [11]. This led to the transition and extension of these existing statistical methods to computer experiments [11, 12]. Widely applied are the factorial designs (grid-based) [13] and optimal (maximin) Latin Hypercube designs (LHDs) [14]. Both are illustrated in **Figures 1** and **2**, respectively, for a two-dimensional input space and 16 samples. An LHD avoids collapsing points should the input space be projected onto a lower-dimensional space. Other approaches include maximin and minimax designs, Box–Behnken designs [15], central composite designs [16] and (quasi-)Monte Carlo methods [17–19].

Despite their widespread usage, these standard approaches to generate experimental designs come with a number of disadvantages. First and foremost: the most qualitative designs (with the best space-filling properties) can be extremely complex to generate (especially for problems with a high-dimensional input space) due to their geometric properties. For instance, generating an LHD with optimal maximin distance is a very time-consuming process. In fact, the generation of an optimal LHD is almost a field of its own, with several different methods for faster and reliable generation [14, 20]. Fortunately, once a design is generated, it can be reused.

For some other design methodologies, it is not possible to generate them for an arbitrary size. Given the expensive nature of each evaluation, this can result in an unacceptable growth of the required simulation time. Factorial designs, for instance, always have size *k*<sup>*d*</sup> with level *k* and dimension *d*, making them infeasible choices for problems with many input parameters.
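The maximin LHD construction discussed above can be sketched with a naive random search: draw permutation-based Latin Hypercubes and keep the candidate whose closest pair of points is furthest apart. All function names below are illustrative, and this brute-force search is no substitute for the dedicated generation methods of [14, 20]:

```python
import itertools
import math
import random

def lhd(n, d, rng):
    """One random Latin Hypercube design: n points in [0,1]^d,
    with exactly one point per axis-aligned 'bin' in every dimension."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        # centre each point in its bin
        cols.append([(p + 0.5) / n for p in perm])
    return [tuple(c[i] for c in cols) for i in range(n)]

def min_pairwise_dist(points):
    # the maximin criterion: distance of the closest pair
    return min(math.dist(a, b) for a, b in itertools.combinations(points, 2))

def maximin_lhd(n, d, tries=200, seed=0):
    """Naive maximin search: among random LHD candidates, keep the one
    whose closest pair of points is furthest apart (illustrative only)."""
    rng = random.Random(seed)
    return max((lhd(n, d, rng) for _ in range(tries)), key=min_pairwise_dist)

design = maximin_lhd(16, 2)
```

For the 16-point, two-dimensional case of **Figure 2**, each of the 16 "rows" and "columns" contains exactly one point, which is what prevents collapsing projections.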

Surrogate Modelling with Sequential Design for Expensive Simulation Applications

http://dx.doi.org/10.5772/67739


**Figure 1.** Two-dimensional factorial design with four levels per dimension.

**Figure 2.** Two-dimensional optimal maximin Latin Hypercube design of size 16.

Another disadvantage of one-shot methodologies is the arbitrary choice of the size of the design. The choice should depend entirely on the nature of the problem (i.e. larger design spaces with more complex behaviour require more evaluations). However, this information is unavailable at the time the design is generated. Hence, one-shot approaches risk selecting too few data points, resulting in an underfitted model, or selecting too many data points, causing a loss of time and computational resources.

### **3.2. Sequential experimental design**

As a solution, sequential design was adopted [21]. This methodology starts from a very small one-shot design to initiate the process. After evaluation of these samples a model is built, and a loop is initiated which is only exited when one of the specified stopping criteria is met. Within the loop, an *adaptive sampling* algorithm is run to select additional data points for evaluation, which are used to update the model. **Figure 3** displays this process graphically.

**Figure 3.** Surrogate modelling with sequential design.

This approach has a number of advantages. First of all, constraints on the surrogate modelling process can be explicitly imposed through the stopping criteria. Typical criteria include how well the model satisfies the model requirements, a maximum number of allowed evaluations, or a maximum runtime. Second, the *adaptive sampling* method can be designed to select new data points specifically in terms of the requirements. Sampling to obtain a globally accurate model will differ from sampling to discover class boundaries or sampling to obtain optima. These choices can also be guided by all information available about the input-output behaviour: when *n* samples have been selected, a history of intermediate models and all simulator responses is available to guide the selection of new samples. Because of the information available, this selection no longer has to be purely based on a black-box approach, and information can be exploited.
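The flow of **Figure 3** can be condensed into a small skeleton. Everything below is a stand-in chosen for illustration: a cheap 1-D "simulator", a piecewise-linear "model", an evaluation budget as the only stopping criterion, and a deliberately simple space-filling sampler that bisects the widest gap between evaluated points:

```python
import bisect

def simulate(x):
    # stand-in for an expensive simulator evaluation
    return (x - 0.3) ** 2

def build_model(samples):
    # stand-in "surrogate": piecewise-linear interpolation through the data
    pts = sorted(samples.items())
    xs = [p[0] for p in pts]
    def model(x):
        i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
        (x0, y0), (x1, y1) = pts[i - 1], pts[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return model

def next_sample(samples):
    # naive space-filling rule: midpoint of the widest unexplored gap
    xs = sorted(samples)
    gaps = [(b - a, (a + b) / 2) for a, b in zip(xs, xs[1:])]
    return max(gaps)[1]

def sequential_design(budget=9):
    samples = {x: simulate(x) for x in (0.0, 0.5, 1.0)}  # small initial one-shot design
    model = build_model(samples)
    while len(samples) < budget:      # stopping criterion: evaluation budget
        x = next_sample(samples)      # adaptive sampling step
        samples[x] = simulate(x)      # expensive evaluation
        model = build_model(samples)  # model update
    return model, samples

model, samples = sequential_design()
```

Swapping `next_sample` for a model-aware criterion (e.g. sampling where the model is most uncertain or the response most non-linear) turns the same skeleton into the exploitation-driven strategies discussed next.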

Roughly, all methods for adaptive sampling are based on one or more of the following criteria (discussed in more detail below):

• Distance to neighbouring points (space-filling designs)

• Model uncertainty

• Non-linearity of the response

• Identification of optima

• Feasibility of the candidate point w.r.t. constraints

Depending on the goal and model requirements, a strategy can be designed involving a complex combination of these criteria. Fundamentally, each approach will involve two competing objectives:

**1. Exploration**: sampling regions of the design space where proportionally only little information has been acquired.

**2. Exploitation**: sampling promising (w.r.t. the goal) regions of the design space.

It is clear, however, that a good strategy strikes a balance between these goals, as both are required to obtain satisfactory results.

Exploration-based algorithms are typically less involved with the goal of the process. They are crucial to assure that no relevant parts of the response surface are completely missed. Roughly, the space-filling and model uncertainty criteria focus mostly on exploration. Space-filling sequential experimental design usually involves the distance to neighbouring points, e.g. the maximin/minimax criteria, potentially complemented with projective properties [22]. Model uncertainty is either explicitly available or must be derived somehow. Bayesian model types represent the former, e.g. the prediction variance of Kriging and Gaussian Process models, which can be applied directly for maximum variance sampling [23, 24] or maximum entropy designs [25]. For these kinds of models, a better way is expressing the uncertainty on the model hyperparameters, resulting in approaches that reduce this uncertainty and hence enhance the overall model confidence [26]. Model uncertainty can also be derived by training several models and comparing their responses. Areas with the most disagreement are then marked for additional samples. This can be a very effective approach in combination with ensemble modelling.

On the other hand, exploitation methods clearly pursue the goal of the process. In case a globally accurate model is required, a very effective approach is raising the information density in regions with non-linear response behaviour (e.g. LOLA-Voronoi [27], FLOLA-Voronoi [28]). The latter approach does not even require intermediate models, as it operates on local linear interpolations. For optimization purposes, specific adaptive sampling methodologies can be applied, depending on the specific task. For single-objective optimization, examples of such sampling methods include CORS [29] and Bayesian optimization acquisition functions such as Expected Improvement [30] (combined with Kriging models this corresponds to the well-known Efficient Global Optimization approach [31]), the Knowledge Gradient [32] and Predictive Entropy Search [33]. Many of these methods for optimization can also be used in combination with a method which learns about the feasibility of input regions of the design space: during the iterative process, an additional model learns the feasibility from the samples (as reflected by the simulation thereof). This information is then used during the selection of new samples with specific criteria such as the Probability of Feasibility (PoF) [34].

Surrogate-based optimization has also been extended to problems with two or more (potentially conflicting) objectives. The goal of this type of multi-objective optimization is the identification of a Pareto front of solutions, which presents the trade-off between these objectives. Existing approaches include hypervolume-based methods such as the Hypervolume Probability of Improvement (HvPoI) [35, 36], the Hypervolume Expected Improvement [37] or multi-objective Predictive Entropy Search [38].
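As an example of the acquisition functions named above, Expected Improvement [30] has a simple closed form when the surrogate returns a Gaussian predictive distribution. The sketch below is generic: `mu` and `sigma` are assumed to come from some Bayesian surrogate (e.g. a Kriging prediction at a candidate point), and `best` is the best objective value observed so far (minimization):

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for minimization: E[max(best - Y - xi, 0)] with Y ~ N(mu, sigma^2).
    mu and sigma are the surrogate's predictive mean and standard deviation."""
    if sigma <= 0.0:
        # deterministic prediction: improvement is known exactly
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    return (best - mu - xi) * cdf + sigma * pdf
```

A sampler would evaluate this criterion on many candidate points and pick the maximizer; a low predicted mean raises the first term (exploitation), while a large predictive variance raises the second (exploration).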

## **4. SUMO Toolbox**


Designed as a research platform for sequential sampling and adaptive modelling using MATLAB, the SUMO Toolbox [39, 40] has grown into a mature design tool for surrogate modelling, offering a large variety of algorithms for the approximation of simulators with continuous and discrete outputs. The software design is fully object oriented, allowing high extensibility of its capabilities. By default, the platform follows the integrated modelling flow with sequential design, but it can also be configured to approximate data sets, use a one-shot design, etc.

The design goals of the SUMO Toolbox support the approximation of expensive computer simulations of complex black-box systems with several design parameters by cheap-to-evaluate models, both in a regression and a classification context. To meet these goals, the SUMO Toolbox offers sequential sampling and adaptive modelling in a highly configurable environment, which is easy to extend due to its microkernel design. Distributed computing support for the evaluation of data points is also available, as well as multi-threading to support the usage of multi-core architectures for regression modelling and classification. Many different plugins are available for each of the sub-problems, including many of the algorithms and methods mentioned in this chapter.

The behaviour of each software component is configurable through a central XML file, and components can easily be added, removed or replaced by custom implementations. The SUMO Toolbox is free for academic use and is available for download at http://sumo.intec.ugent.be. It can be installed on any platform supported by MATLAB. In addition, a link can be found to the available documentation and tutorials explaining how to install and configure the toolbox, including some of its more advanced features.

## **5. Illustrations**

To illustrate the flexibility of the surrogate modelling framework with sequential design, some example cases are considered. The SUMO Toolbox was used for each case.

#### **5.1. Low‐noise amplifier**

This test case is a real-world problem from electronics. A low-noise amplifier (LNA), a simple radio frequency circuit, is the typical first stage of a receiver, providing the gain to suppress the noise of subsequent stages. The performance of an LNA can be determined by means of computer simulations in which the underlying physical behaviour is taken into account. For this experiment, we chose to model the input noise-current as a function of two (normalized) parameters: the inductance and the MOSFET width. The response to the inputs for this test case is smooth with a steep ridge in the middle. This type of strongly non-linear behaviour is difficult to approximate.
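The "smooth surface with a steep ridge" behaviour can be mimicked with a purely hypothetical analytic stand-in (this is not the actual LNA simulator, only a function with a comparable shape: a narrow ridge across the unit square):

```python
import math

def ridge_response(x, y):
    """Hypothetical stand-in: smooth everywhere, with a sharp ridge
    along the line x + y = 1 (cf. the ridge visible in Figures 4 and 5)."""
    return math.exp(-50.0 * (x + y - 1.0) ** 2)
```

Uniform sampling wastes most points on the nearly flat regions of such a response; an adaptive strategy like the one described below concentrates points on the ridge instead.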

The model type for this problem is an ANN, trained with Levenberg–Marquardt backpropagation with Bayesian regularization (300 epochs). The network topology and initial weights are optimized by a genetic algorithm, with a maximum of two layers. The process is initiated by combining a 10-point LHD with a two-level factorial design (the corner points). As adaptive sampling methodology, the FLOLA-Voronoi algorithm was chosen, selecting a single point per iteration. Once the steep ridge has been discovered, the information density in this area will be increased. As model quality estimator, crossvalidation was used, with the root relative square error (RRSE) function:

$$\text{RRSE}(x, \tilde{x}) = \sqrt{\frac{\sum_{i=1}^{n} (x_{i} - \tilde{x}_{i})^{2}}{\sum_{i=1}^{n} (x_{i} - \overline{x})^{2}}}. \tag{2}$$
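Eq. (2) translates directly into code (a small sketch; `x` holds the true simulator responses and `x_tilde` the corresponding surrogate predictions):

```python
import math

def rrse(x, x_tilde):
    """Root relative squared error of predictions x_tilde against true values x:
    the residual sum of squares normalized by the spread of x around its mean."""
    mean = sum(x) / len(x)
    num = sum((xi - xt) ** 2 for xi, xt in zip(x, x_tilde))
    den = sum((xi - mean) ** 2 for xi in x)
    return math.sqrt(num / den)
```

A score of 1 means the surrogate is no better than always predicting the mean response; the 0.05 threshold used below demands residuals far below the natural variation of the output.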

**Figure 4.** LNA: final surrogate model for the LNA illustration. The sharp peak is clearly present.


**Figure 5.** LNA: sample distribution as constructed sequentially by the FLOLA‐Voronoi adaptive sampling algorithm.

The stopping criterion was set to an RRSE score below 0.05. This was achieved after evaluating a total of 51 samples. A plot of the model is shown in **Figure 4**. Additionally, the distribution of the samples in the two-dimensional input space is shown in **Figure 5**. The focus on the ridge can be clearly observed. In comparison, repeating the experiment in a one-shot setting with an LHD of 51 points and keeping all other settings results in an RRSE score of 0.11, more than twice as high. This is mostly caused by an inadequate detection of the non-linearity (only a few samples are near the non-linearity), whereas many samples lie on the smoothly varying surfaces.

#### **5.2. Optimization: gas cyclone**

The next illustration is more involved and is a joint modelling process aiming both at design optimality as well as feasibility. The goal is to optimize the seven-dimensional geometry of a gas cyclone. These components are widely used in air pollution control, gas-solid separation for aerosol sampling and industrial applications aiming to catch large particles, such as vacuum cleaners. An illustration is given in **Figure 6**. In cyclone separators, a strongly swirling turbulent flow is used to separate phases with different densities. A tangential inlet generates a complex swirling motion of the gas stream, which forces particles toward the outer


Surrogate Modelling with Sequential Design for Expensive Simulation Applications

http://dx.doi.org/10.5772/67739

**Figure 6.** Cyclone: illustration of a gas cyclone.

wall where they spiral in the downward direction. Eventually, the particles are collected in the dustbin (or flow out through a dipleg) located at the bottom of the conical section of the cyclone body. The cleaned gas leaves through the exit pipe at the top. The cyclone geometry [41] is described by seven geometrical parameters: the inlet height *a*, width *b*, the vortex finder diameter *D*<sub>x</sub> and length *S*, cylinder height *h*, cyclone total height *H*<sub>t</sub> and cone‐tip diameter *B*<sub>c</sub>. Modifying these parameters has an impact on the performance and behaviour of the gas cyclone itself.

Design optimality for the gas cyclone, however, is not represented by a unique and optimal number. In fact, it is represented by two different aspects: the pressure loss (expressed by the Euler number) and the cut‐off diameter, which is expressed by the Stokes number. Both aspects represent a trade‐off, and the proper scaling to sum both into a single objective is unknown. Hence, the correct way to proceed is to identify a set of Pareto optimal solutions representing the trade‐off inherent to this problem, rather than a single solution. Presented with this trade‐off, the designer has to make the final decision on what the optimal design should be. The shape of the Pareto front is informative and of great value for the designer (w.r.t. robustness of the solution, for example). For the optimization, the two outputs of a simulation corresponding to these objectives are approximated with the built‐in Kriging models [22].
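To make the notion of Pareto optimality concrete: with both objectives minimized, a design is Pareto optimal exactly when no other evaluated design is at least as good in both objectives and strictly better in at least one. A minimal Python sketch (illustrative only, not the SUMO Toolbox implementation; the toy scores are invented):

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    Each point is a tuple of objective values to be minimized, e.g.
    (Euler number, Stokes number) for a cyclone design. A point q
    dominates p if q is <= p in every objective and < p in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qo <= po for qo, po in zip(q, p))
            and any(qo < po for qo, po in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy scores for four candidate designs (pressure loss, cut-off size):
designs = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (3.0, 4.0)]
print(pareto_front(designs))  # [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0)]
```

The quadratic scan over all pairs is perfectly adequate at the small evaluation budgets used in surrogate-based optimization.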

In addition, geometry optimization usually involves constraints, as some configurations are not feasible or result in gas cyclones that do not work according to specifications. In addition to the Euler and Stokes objectives, the simulation of a sample also emits four binary values indicating whether their corresponding constraint was satisfied or not (denoted as *c*<sub>1</sub>, *c*<sub>2</sub>, *c*<sub>3</sub> and *c*<sub>4</sub>). As each evaluation is computationally demanding, this additional knowledge should be included in order to maximize the probability of selecting feasible solutions. Each of the constraint outputs will therefore be approximated by a probabilistic SVM. For selecting new samples, both the Pareto optimality and the feasibility need to be considered. To this end, the HvPoI and PoF criteria are used, respectively. The PoF score is not computed explicitly (as it would be for a Gaussian Process) but interpreted as the SVM probability for the class representing feasible samples. This results in the following joint criterion:

$$\alpha(\mathbf{x}) = \text{HvPoI}(\mathbf{x}) \prod\_{c \in \{c\_1, c\_2, c\_3, c\_4\}} \text{PoF}\_c(\mathbf{x}) \tag{3}$$

For each iteration of the sequential design, this criterion is optimized, resulting in a new sample maximizing the probability of a more feasible and more optimal solution. To start, an LHD is constructed in seven dimensions. The kernel bandwidth and regularization constant hyperparameters for the SVMs are optimized with the DIRECT optimization algorithm [42], using cross‐validation with the popular *F*<sub>1</sub>‐score of the positive class as error function. The hyperparameters for the Kriging models are optimized with maximum‐likelihood estimation. The sampling criterion is first optimized randomly with a dense set of random points; then the best solution serves as a starting point for applying gradient descent locally to refine the solution. The stopping criterion was set to a maximum of 120 evaluated data points.
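As a rough sketch of how Eq. (3) combines the pieces, the fragment below trains one probabilistic SVM per black-box constraint and multiplies the feasible-class probabilities into a (placeholder) HvPoI term. The data, the `hvpoi` stand-in and all constants are invented for illustration; DIRECT hyperparameter tuning and the real hypervolume computation over the Kriging models are omitted:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 7-D designs and four binary constraint outcomes per
# design (1 = constraint satisfied), as the simulator would report them.
X = rng.random((40, 7))
constraint_labels = [(X[:, i] > 0.5).astype(int) for i in range(4)]

# One probabilistic SVM per constraint. In the chapter, the RBF bandwidth
# and regularization constant are tuned with DIRECT; scikit-learn
# defaults are used here for brevity.
svms = [SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
        for y in constraint_labels]

def prob_feasible(x):
    """Product over constraints of the SVM probability of the feasible class."""
    x = np.atleast_2d(x)
    p = 1.0
    for clf in svms:
        feasible_col = list(clf.classes_).index(1)
        p *= clf.predict_proba(x)[0, feasible_col]
    return p

def hvpoi(x):
    """Placeholder: the real HvPoI scores x against the current Pareto
    front of the Kriging models; a constant is returned for illustration."""
    return 1.0

def alpha(x):
    # Joint criterion of Eq. (3): likely Pareto-improving *and* feasible.
    return hvpoi(x) * prob_feasible(x)

candidate = rng.random(7)
print(0.0 <= alpha(candidate) <= 1.0)  # True
```

In the sequential loop, `alpha` is what gets maximized (random search followed by local refinement) to pick the next simulation to run.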

**Figure 7** shows the scores for all evaluated samples on both objectives. The bullets and squares represent the samples forming the Pareto front. The black‐box constraints were learned as the optimization proceeded, hence many evaluated samples do not satisfy the constraints (as this was unknown at the time); however, 8% of the evaluated samples do satisfy the constraints. Fortunately, four of them are Pareto optimal and represent valid optimal configurations. The exact optimal Pareto front was unknown upfront: in order to provide a comparison and verify the integrity of the identified solutions, the traditional NSGA‐II [43] multi‐objective optimization algorithm was applied directly on the CFD simulations for a total of 10,000 evaluations. Clearly, the Pareto optimal solutions found by the surrogate‐based approach form a similar


**Figure 7.** Cyclone: Pareto front obtained after 120 evaluations for the gas cyclone optimization. The plot also distinguishes between feasible and infeasible points and shows the Pareto front obtained by NSGA‐II after an extensive 10,000 evaluations.

front. Our approach was able to identify these solutions with significantly fewer evaluations and hence significantly faster. Therefore, the identified Pareto front is a very good approximation given the budget constraint of 120 evaluations.

#### **5.3. Satellite braking system**

Finally, we demonstrate the use of surrogate models for performing analysis into the relevance of input parameters. A simulation of a braking system of the Aalto‐1 student satellite [44, 45] was modelled with sequential design. The brake consists of a small mass *m* attached to a tether, which is extended at a constant speed *v*<sub>feed</sub>. The satellite is spinning around with an angular velocity *ω*<sub>sat</sub>, which is also the angular velocity of the mass at the beginning of the deployment. As the distance of the tip of the tether to the satellite increases, the angular velocity of the tip, *ω*<sub>tip</sub>, decreases. This results in a displacement angle *γ* of the tether from its initial balance position. This causes a tangential force negative to the rotational direction of the satellite, causing it to spin around slower. The same tangential force accelerates the tip, which results in a decrease of the angle, until the tether has extended sufficiently again to further decrease the rotation of the satellite. **Figure 8** illustrates the setup graphically.
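The geometric core of this mechanism — the tip slowing as the tether extends, which makes the displacement angle grow — can be illustrated with a deliberately simplified toy model. The r² angular-momentum scaling and all numbers below are illustrative assumptions, not the Aalto-1 simulator, and the braking feedback on the satellite is ignored:

```python
def tip_angular_velocity(omega0, r0, r):
    """Toy angular-momentum argument: omega * r**2 is held constant, so
    the tip's angular velocity drops as the tether length r grows."""
    return omega0 * (r0 / r) ** 2

# As the tether extends, the tip rotates ever slower than the satellite:
omega_tip = [tip_angular_velocity(1.0, 0.5, r) for r in (0.5, 1.0, 2.0, 4.0)]
print(omega_tip)  # [1.0, 0.25, 0.0625, 0.015625]

# The displacement angle accumulates the phase lag between the satellite
# (held at 1.0 rad/s in this toy) and the slowing tip:
dt, r, gamma = 0.1, 0.5, 0.0
for _ in range(1000):
    r += 0.01 * dt                       # constant deployment speed v_feed
    gamma += (1.0 - tip_angular_velocity(1.0, 0.5, r)) * dt
print(gamma > 0.0)  # True
```

In the real system, this growing angle produces the tangential tension force that both brakes the satellite and re-accelerates the tip, giving the oscillatory behaviour described above.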

Although the displacement angle effectively causes the braking effect, it must remain within an acceptable range to prevent a range of undesired effects and issues. To this end, a simulation for *γ* was developed with five input parameters (the time after deployment, the initial angular velocity of the satellite, the mass of the tip, the deployment speed of the tether and the width of the satellite). Using surrogate models, the aim is to find the parameters which influence the displacement angle the most.

**Figure 8.** Satellite: illustration of the braking system.

To approach this problem, the process starts from an LHD of 20 points. Iteratively, a space‐filling sequential design selects 10 additional samples for evaluation on the simulator. The space‐filling approach incorporates both the maximin distance and projective properties as described in Ref. [46]. The model type selected is a GP with the Matérn 3/2 covariance function with Automated Relevance Determination (ARD). When 300 samples were evaluated, the process was terminated and the analytical approach presented in Ref. [6] was used to compute the first‐order Sobol indices, as well as the total Sobol indices (first‐order indices augmented with all indices of higher‐order interactions containing this parameter) [4]. Both indices are plotted in **Figure 9**.
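The sampling details above are toolbox-specific, but the final step — estimating first-order Sobol indices from a cheap GP surrogate — can be sketched with a stand-in simulator. The test function below is invented (one dominant input, one inert input, mimicking the qualitative findings for the satellite), and a Monte Carlo pick-and-freeze estimator is used instead of the analytical approach of Ref. [6]:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
d = 5

def simulator(X):
    # Hypothetical stand-in for the expensive simulator on [0, 1]^5:
    # input 4 dominates, input 2 has no effect at all.
    return 3.0 * X[:, 4] + 0.5 * X[:, 0] + 0.2 * X[:, 1] * X[:, 3]

# Surrogate: GP with a Matern 3/2 kernel; one length-scale per input
# dimension gives Automated Relevance Determination (ARD).
X_train = rng.random((120, d))
kernel = Matern(length_scale=np.ones(d), nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, simulator(X_train))

def first_order_sobol(model, dim, n=4096):
    """Pick-and-freeze Monte Carlo estimate of the first-order Sobol
    index of input `dim`, evaluated on the cheap surrogate."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    AB = A.copy()
    AB[:, dim] = B[:, dim]          # vary only input `dim` between A and AB
    fA, fB, fAB = model.predict(A), model.predict(B), model.predict(AB)
    return float(np.mean(fB * (fAB - fA)) / np.var(np.concatenate([fA, fB])))

indices = [first_order_sobol(gp, i) for i in range(d)]
print([round(s, 2) for s in indices])
```

Because every Sobol evaluation hits the surrogate rather than the simulator, the thousands of Monte Carlo points cost essentially nothing once the 300-sample budget has been spent.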

From the results, it can be clearly observed that the mass of the tip has no impact at all on the displacement angle. It can therefore be disregarded from any further design decisions. All other parameters do have some impact, as expected. The deployment speed clearly has the highest impact. This is intuitive, as faster deployment results in more significant differences between the angular velocities of the satellite and the tip.

**Figure 9.** Satellite: Sobol indices for the final GP model.

## **6. Conclusion**

Surrogate modelling techniques have proven successful for working with expensive simulations, and expensive objectives in general. Within this flexible methodology, complemented with intelligent sequential sampling (sequential design), several tasks ranging from design space exploration and sensitivity analysis to (multi‐objective) optimization with constraints can be accomplished efficiently with only a small number of evaluations. This greatly enhances the capability to design complex systems virtually, reducing the time and costs of product development cycles and resulting in a shorter time‐to‐market. The strengths and possibilities were demonstrated on a few real‐world examples from different domains.

## **Acknowledgements**

Ivo Couckuyt is a post‐doctoral research fellow of FWO‐Vlaanderen. The authors would like to thank NXP Semiconductors and Jeroen Croon in particular for providing the LNA simulator code.

## **Author details**

Joachim van der Herten\*, Tom Van Steenkiste, Ivo Couckuyt and Tom Dhaene

\*Address all correspondence to: joachimvanderherten@intec.ugent.be

Department of Information Technology, IDLab, Ghent University ‐ imec, Ghent, Belgium

## **References**

[1] Goethals K, Couckuyt I, Dhaene T, Janssens A. Sensitivity of night cooling performance to room/system design: surrogate models based on CFD. Building and Environment. 2012;**58**:23‐36. DOI: 10.1016/j.buildenv.2012.06.015

[2] Simpson TW, Booker AJ, Ghosh D, Giunta AA, Koch PN, Yang R‐J. Approximation methods in multidisciplinary analysis and optimization: a panel discussion. Structural and Multidisciplinary Optimization. 2004;**27**(5):302‐313. DOI: 10.1007/s00158‐004‐0389‐9

[3] Saltelli A. Sensitivity analysis for importance assessment. Risk Analysis. 2002;**22**(3):579‐590. DOI: 10.1111/0272‐4332.00040

[4] Sobol IM. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation. 2001;**55**(1‐3):271‐280. DOI: 10.1016/S0378‐4754(00)00270‐6

[5] Keiichi I, Couckuyt I, Poles S, Dhaene T. Variance‐based interaction index measuring heteroscedasticity. Computer Physics Communications. 2016;**203**:152‐161. DOI: 10.1016/j.cpc.2016.02.032

[6] Van Steenkiste T, van der Herten J, Couckuyt I, Dhaene T. Sensitivity analysis of expensive black‐box systems using metamodeling. In: Roeder TMK, Frazier PI, Szechtman R, Zhou E, Huschka T, Chick SE, editors. Proceedings of the 2016 Winter Simulation Conference; 11‐14 December 2016; Washington, D.C., USA: Institute of Electrical and Electronics Engineers, Inc.; 2016. DOI: 10.1109/WSC.2016.7822123

[7] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks; 27 November–1 December 1995; Piscataway, New Jersey, USA: IEEE; 1995. DOI: 10.1109/ICNN.1995.488968

[8] Dellino G, Kleijnen JPC, Meloni C. Robust simulation‐optimization using metamodels. In: Rossetti MD, Hill RR, Johansson B, Dunkin A, Ingalls RG, editors. Proceedings of the Winter Simulation Conference; 13‐16 December 2009; Austin, TX, USA: IEEE; 2009. p. 540‐550. DOI: 10.1109/WSC.2009.5429720

[9] Couckuyt I, Aernouts J, Deschrijver D, De Turck F, Dhaene T. Identification of quasi‐optimal regions in the design space using surrogate modeling. Engineering with Computers. 2013;**29**(2):127‐138. DOI: 10.1007/s00366‐011‐0249‐3

[10] Vapnik V. The Nature of Statistical Learning Theory. 2nd ed. New York: Springer‐Verlag; 2000. 334 p. DOI: 10.1007/978‐1‐4757‐3264‐1

[11] Sacks J, Welch WJ, Mitchell TJ, Wynn HP. Design and analysis of computer experiments. Statistical Science. 1989;**4**(4):409‐423.

[12] Kleijnen JPC. Design and Analysis of Simulation Experiments. 2nd ed. New York City, USA: Springer International Publishing; 2015. 337 p. DOI: 10.1007/978‐3‐319‐18087‐8

[13] Montgomery DC. Design and Analysis of Experiments. 8th ed. Hoboken, New Jersey, USA: John Wiley & Sons; 2008. 752 p. DOI: 10.1002/qre.458

[14] Van Dam ER, Husslage B, Den Hertog D, Melissen H. Maximin Latin hypercube designs in two dimensions. Operations Research. 2007;**55**(1):158‐169. DOI: 10.1287/opre.1060.0317

[15] Box GEP, Behnken D. Some new three level designs for the study of quantitative variables. Technometrics. 1960;**2**:455‐475. DOI: 10.1080/00401706.1960.10489912

[16] Raymond HM, Montgomery DC, Anderson‐Cook CM. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. 4th ed. Hoboken, New Jersey, USA: John Wiley & Sons; 2016. 856 p.

[17] Hendrickx W, Dhaene T. Sequential design and rational metamodelling. In: Kuh ME, Steiger NM, Armstrong FB, Joines JA, editors. Proceedings of the Winter Simulation Conference; 4 December 2005; Orlando, FL, USA: IEEE; 2005. pp. 290‐298. DOI: 10.1109/WSC.2005.1574263

[18] Jin R, Chen W, Sujianto A. An efficient algorithm for constructing optimal design of computer experiments. Journal of Statistical Planning and Inference. 2005;**134**(1):268‐287. DOI: 10.1016/j.jspi.2004.02.014

[19] Niederreiter H. Random Number Generation and Quasi‐Monte Carlo Methods. 1st ed. Philadelphia, Pennsylvania, United States: Society for Industrial and Applied Mathematics; 1992. 241 p. DOI: 10.1137/1.9781611970081

[20] Viana FAC, Gerhard V, Vladimir B. An algorithm for fast optimal Latin hypercube design of experiments. International Journal for Numerical Methods in Engineering. 2010;**82**(2):135‐156. DOI: 10.1002/nme.2750

[21] Gorissen D. Grid‐enabled adaptive surrogate modeling for computer aided engineering [dissertation]. Ghent, Belgium: Ghent University, Faculty of Engineering; 2010. 384 p. Available from: https://biblio.ugent.be/publication/1163941/file/4335278.pdf

[22] Couckuyt I, Dhaene T, Demeester P. ooDACE Toolbox: a flexible object‐oriented Kriging implementation. Journal of Machine Learning Research. 2014;**15**:3183‐3186.

[23] Kleijnen JPC, van Beers WCM. Application‐driven sequential designs for simulation experiments: kriging metamodelling. Journal of the Operational Research Society. 2004;**55**(8):876‐883. DOI: 10.1057/palgrave.jors.2601747

[24] Sasena MJ. Flexibility and efficiency enhancements for constrained global design optimization with kriging approximations [dissertation]. 2002.

[25] Farhang‐Mehr A, Azarm S. Bayesian meta‐modelling of engineering design simulations: a sequential approach with adaptation to irregularities in the response behaviour. International Journal for Numerical Methods in Engineering. 2005;**62**(15):2104‐2126. DOI: 10.1002/nme.1261

[26] Garnett R, Osborne M, Hennig P. Active learning of linear embeddings for Gaussian processes. In: Zhang ML, Tian J, editors. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence; 23‐27 July 2014; Quebec, Canada: AUAI Press; 2014. p. 230‐239.

[27] Crombecq K, Gorissen D, Deschrijver D, Dhaene T. A novel hybrid sequential design strategy for global surrogate modeling of computer experiments. SIAM Journal on Scientific Computing. 2011;**33**(4):1948‐1974. DOI: 10.1137/090761811

[28] van der Herten J, Couckuyt I, Deschrijver D, Dhaene T. A fuzzy hybrid sequential design strategy for global surrogate modeling of high‐dimensional computer experiments. SIAM Journal on Scientific Computing. 2015;**37**(2):A1020–A1039. DOI: 10.1137/140962437

[29] Regis RG, Shoemaker CA. Constrained global optimization of expensive black box functions using radial basis functions. Journal of Global Optimization. 2005;**31**(1):153‐171. DOI: 10.1007/s10898‐004‐0570‐0

[30] Močkus J. On Bayesian methods for seeking the extremum. In: Marchuk GI, editor. Optimization Techniques IFIP Technical Conference; Novosibirsk; Berlin, Heidelberg: Springer; 1975. p. 400‐404.

[31] Jones DR, Schonlau M, Welch WJ. Efficient global optimization of expensive black‐box functions. Journal of Global Optimization. 1998;**13**(4):455‐492. DOI: 10.1023/A:1008306431147

[32] Scott W, Frazier PI, Powell W. The correlated knowledge gradient for simulation optimization of continuous parameters using Gaussian process regression. SIAM Journal on Optimization. 2011;**21**(3):996‐1026. DOI: 10.1137/100801275

[33] Hernández‐Lobato JM, Hoffman MW, Ghahramani Z. Predictive entropy search for efficient global optimization of black‐box functions. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ, editors. Advances in Neural Information Processing Systems 27; 8‐13 December 2014; Montreal, Canada: Curran Associates, Inc.; 2014. p. 918‐926.

[34] Gardner J, Kusner M, Weinberger KQ, Cunningham J, Xu Z. Bayesian optimization with inequality constraints. In: Jebara T, Xing EP, editors. Proceedings of the 31st International Conference on Machine Learning (ICML‐14); 21‐26 June 2014; Beijing, China: JMLR.org; 2014. pp. 937‐945.

[35] Emmerich M, Beume N, Naujoks B. An EMO algorithm using the hypervolume measure as selection criterion. In: Coello Coello CA, Hernández Aguirre A, Zitzler E, editors. International Conference on Evolutionary Multi‐Criterion Optimization; 9‐11 March 2005; Guanajuato, Mexico: Springer Berlin Heidelberg; 2005. p. 62‐76.

[36] Couckuyt I, Deschrijver D, Dhaene T. Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization. Journal of Global Optimization. 2013;**60**(3):575‐594. DOI: 10.1007/s10898‐013‐0118‐2

[37] Emmerich M, Hingston P, Deutz AH, Klinkenberg JW. Hypervolume‐based expected improvement: monotonicity properties and exact computation. In: IEEE Congress on Evolutionary Computation (CEC); 5‐8 June 2011; New Orleans, LA, USA: IEEE; 2011. p. 2147‐2154. DOI: 10.1109/CEC.2011.5949880

[38] Hernández‐Lobato D, Hernández‐Lobato JM, Shah A, Adams RP. Predictive entropy search for multi‐objective Bayesian optimization. In: Balcan MF, Weinberger KQ, editors. Proceedings of the 33rd International Conference on Machine Learning (ICML‐16); 19‐24 June 2016; Manhattan, New York: JMLR.org; 2016. p. 1492‐1501.

[39] Gorissen D, Crombecq K, Couckuyt I, Demeester P, Dhaene T. A surrogate modeling and adaptive sampling toolbox for computer based design. Journal of Machine Learning Research. 2010;**11**:2051‐2055.

[40] van der Herten J, Couckuyt I, Deschrijver D, Dhaene T. Adaptive classification under computational budget constraints using sequential data gathering. Advances in Engineering Software. 2016;**99**:137‐146. DOI: 10.1016/j.advengsoft.2016.05.016
[41] Elsayed K. Optimization of the cyclone separator geometry for minimum pressure drop using Co‐Kriging. Powder Technology. 2015;**269**:409‐424. DOI: 10.1016/j.powtec.2015.09.003

**Chapter 9**

**Computer Simulation of High‐Frequency Electromagnetic Fields**

Andrey D. Grigoriev

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67497

> © 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

High‐frequency and microwave electromagnetic fields are used in billions of various devices and systems. Design of these systems is impossible without detailed analysis of their electromagnetic field. Most microwave systems are very complex, so an analytical solution of the field equations for them is impossible. Therefore, it is necessary to use numerical methods of field simulation. Unfortunately, such complex devices as, for example, modern smartphones cannot be accurately analysed by existing commercial codes. The chapter contains a short review of modern numerical methods for the solution of Maxwell's equations. Among them, the vector finite element method is the most suitable for simulation of complex devices with hundreds of details of various forms and materials, but electrically not too large. The method is implemented in the computer code radio frequency simulator (RFS). The code has a friendly user interface, an advanced mesh generator, an efficient solver and a post‐processor. It solves eigenmode problems, driven waveguide problems, antenna problems, electromagnetic‐compatibility problems and others in the frequency domain.

Keywords: electromagnetics, numerical methods, computer simulation, microwaves, cellular phones

1. Introduction

High‐frequency electromagnetic fields are now used in telecommunications and radar systems, astrophysics, plasma heating and diagnostics, biology, medicine, technology and many other applications. Special electromagnetic systems excite and guide these fields with a given time and space distribution. A designer or a researcher of such systems ought to know their electromagnetic field characteristics in detail. This goal can be achieved either by experimental study,
