### **1. Introduction**

260 Simulated Annealing – Single and Multiple Objective Problems

[14] Y. Sheng, A. Takahashi and S. Ueno, "2-Stage Simulated Annealing with Crossover Operator for 3D-Packing Volume Minimization," *Proceedings of the 17th Workshop on Synthesis And System Integration of Mixed Information Technologies (SASIMI)*, pages 227-232, 2012.

The design of analog integrated circuits is complex because it involves several aspects of device modeling, computational methodologies, and human experience. Nowadays, the well-established CMOS (Complementary Metal-Oxide-Semiconductor) technology is mandatory in most integrated circuits. The basic devices are MOS transistors, whose manufacturing process is well understood and constantly updated for the design of small devices. Detailed knowledge of the device technology is needed for modeling all aspects of analog design, since there is a strong dependency between the circuit behavior and the manufacturing process.

Contrary to digital circuits, which are composed of millions (or even billions) of transistors with equal dimensions, analog circuits are formed by tens of transistors, each one with a particular geometric feature and bias operating point. Digital design is characterized by a high degree of automation, in which the designer has little influence on the resulting physical circuit. The quality of the CAD (Computer-Aided Design) tools used for circuit synthesis is much more important than the designer's experience. These tools are able to deal with a large number of devices and interconnections. Digital binary circuits are robust, so the influence of non-linearities and non-idealities is not a major concern. Furthermore, mathematical device models for digital circuits are relaxed and computationally very efficient.

On the other hand, analog design still lacks design automation. This is a consequence of the problem features and the difficulty of implementing generic tools with high design accuracy. The complex relations between design objectives and design variables result in a highly non-linear n-dimensional system. Technology dependency also limits design automation, since electrical behavior is directly related to the physical implementation. In addition, the large number of different circuit topologies, each one with unique details, makes modeling a very difficult task.

©2012 Girardi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Simulated Annealing to Improve Analog Integrated Circuit Design: Trade-Offs and Implementation Issues


In general, the traditional analog design flow is based on the repetition of manual optimization and SPICE (Simulation Program with Integrated Circuit Emphasis) electrical simulations. For a given specification, a circuit topology is captured in a netlist containing devices and interconnections. Device sizes, such as transistor width and length or resistor and capacitor values, are calculated manually. The verification is performed with the aid of SPICE models and technology parameters in order to predict the final performance in silicon. Specific design goals such as dissipated power, voltage gain, or phase margin are achieved by manual calculation and then re-verified in simulation. Once the target performance is met, the design is passed on to a physical design engineer to complete the layout, perform design rule checks, and run layout-versus-schematic verification. The layout engineer passes the extracted physical design information back to the circuit designer to recheck the circuit operation at the electrical level. When physical effects cause the circuit to miss specifications, several more iterations of this circuit-to-layout loop may be required. This process is repeated for each analog block in the circuit, even for a relatively simple specification change. The amount of time and human resources used can vary, depending on the design complexity and the designer's experience. However, even for a large and highly skilled design team, the short time-to-market and strict design objectives are key issues in analog design. Improvements in analog design automation can save design time and effort.

In this chapter we analyze the Simulated Annealing (SA) meta-heuristic applied to adjusting circuit parameters in an automated transistor sizing procedure at the electrical level. Previous work has been done in the field of analog design automation to enable fast design at the block level. Different strategies and approaches have been proposed during the evolution of analog design automation, such as simulation-based optimization [5, 9, 17], symbolic simulation [10], artificial intelligence [6], manually derived design equations [4, 21], hierarchy and topology selection [11], geometric programming [12, 16] and memetic algorithms [15]. The main difficulty preventing widespread usage of these tools is that they require appropriate modeling of both devices (technology dependent) and circuit topologies in order to achieve the design objectives in a reasonable processing time.

Moreover, the option of choosing among different circuit topologies is also difficult to implement in a design methodology or tool, since most approaches work with topology-based equations, limiting the application range. The possibility of adding new block topologies must also be included in the methodology, since it is critical to the design. The usage of optimization algorithms combined with design techniques seems to be a good solution when applied to specific applications, because a general solution most often proves to have shortcomings for fully exploiting the capabilities of the analog CMOS technology. The key requirements of an analog synthesis tool are interactivity with the user, flexibility for multiple topologies, and reasonable response time. An interface with an electrical simulator and with a layout editor is also convenient [8].

The remainder of this chapter is organized as follows. Section 2 explains the Simulated Annealing meta-heuristic, its parameters, and its functions. Circuit modeling, as well as the parameters and functions involved, is described in Sec. 3. Afterward, Sec. 4 presents a basic circuit used to explain the usage of SA, how the searches occur, and the results achieved. In Sec. 5, Simulated Annealing is used to seek solutions to a more complex circuit, in which we analyze the impact of SA parameters and functions as a means to automate circuit design. Finally, Sec. 6 concludes this chapter with our final remarks and future work.


### **2. Simulated annealing**


Simulated Annealing (SA) is a well-known random-search technique that exploits an analogy with the way a heated metal slowly cools and freezes into a minimum-energy crystalline structure, the so-called annealing process. In a more general system, such as an optimization problem, it is used to search for the minimum value of a cost function while avoiding getting trapped in local minima. The algorithm employs random searches which, besides accepting solutions that decrease (i.e. minimize) the objective cost function, may also accept some that increase it. The latter are called "indirect steps", and are allowed in order to escape from local optima.

The SA algorithm uses a *cooling function T*(*t*), which maps a time instant *t* to a temperature *T*, decreasing *T* as *t* increases. At each iteration, new steps are randomly taken, based on a *probabilistic state generation function g*(*X*), leading to new states in the solution space. In this context *X* is a vector of *d* parameters, where *d* is the dimensionality of the solution space. If a step leads to a state with a worse solution, it is only effectively taken, i.e. the new state is accepted, with a probability less than 1. States with better solutions are always accepted. This probability is given by an *acceptance function h*(Δ*F*):

$$h(\Delta F) = \frac{1}{1 + \exp(\Delta F/T)}\tag{1}$$

Here, Δ*F* = *F<sub>t+1</sub>* − *F<sub>t</sub>* represents the variation of the cost function calculated at two consecutive time steps, *t*+1 and *t*.

Whether the algorithm is able to reach an optimal solution depends on the choice of the *cooling function* and the *probabilistic state generation function*. If the temperature in the *cooling function* decreases too fast, the search will run faster, but the SA algorithm is no longer guaranteed to find the global optimum [13]. This may be acceptable if a solution is needed in a small amount of time and the solution space is well known or has high dimensionality. This variant is called Simulated Quenching (SQ) [1], and is useful when an approximate solution is sufficient. There are some common sets of options to choose from when implementing an SA algorithm. They are described below.

#### **2.1. Boltzmann annealing**

Boltzmann annealing is the classical simulated annealing algorithm, using physics principles to choose the *probabilistic state generation function* in order to ensure convergence to a global minimum. It employs a Gaussian distribution for generating new states:

$$g\_{Boltz}(X) = \frac{1}{(2\pi T(t))^{d/2}} \exp\left(-\frac{(\Delta X)^2}{2T(t)}\right) \tag{2}$$

Here, Δ*X* = *X* − *X*<sup>0</sup> and *d* is the dimensionality of the search space. The Boltzmann *cooling function* is described as:

$$T\_{Boltz}(t) = \frac{T\_0}{\log(t)}\tag{3}$$

where *T*<sup>0</sup> is the initial temperature, and *t* is the time step.

Geman and Geman, in the classical paper [7], proved that using the Gaussian distribution to generate new states (Eq. 2) together with the Boltzmann *cooling function* (Eq. 3) is sufficient to reach the global minimum of an optimization function at infinite time.
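A single Boltzmann-annealing move can be sketched as below, under the assumption (common in practice) that sampling from the Gaussian of Eq. 2 is done coordinate-wise with variance *T*(*t*); the helper names are illustrative, not from the chapter.

```python
import math
import random

def boltzmann_cooling(t, t0=100.0):
    # Eq. 3: T(t) = T0 / log(t); defined for time steps t >= 2.
    return t0 / math.log(t)

def boltzmann_step(x, temp, rnd=random):
    # Draw delta X from the Gaussian of Eq. 2: each coordinate gets an
    # independent zero-mean perturbation with variance T(t).
    return [xi + rnd.gauss(0.0, math.sqrt(temp)) for xi in x]

x = [0.0, 0.0]                        # d = 2 search space
for t in range(2, 100):
    x = boltzmann_step(x, boltzmann_cooling(t))
```

Note how slowly the logarithmic schedule cools: after ~100 steps the temperature has only dropped from *T*0/log 2 to about *T*0/4.6.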

#### **2.2. Fast annealing**

Fast Annealing is a variant of Boltzmann Annealing [20] that uses the Cauchy distribution as its *probabilistic state generation function*:

$$g\_{Fast}(X) = \frac{T}{(\Delta X^2 + T(t))^{(d+1)/2}}\tag{4}$$


One advantage of the Cauchy distribution over the Gaussian distribution is its fatter tail. When the temperature decreases, the Cauchy distribution still generates some states far from the current one, while states generated by a Gaussian distribution become tightly concentrated. In this way, convergence using the Cauchy distribution becomes faster.

However, in order to guarantee that the algorithm reaches the global minimum, a special *cooling function* is used:

$$T\_{Fast}(t) = \frac{T\_0}{t} \tag{5}$$

where *T*<sup>0</sup> is the initial temperature, and *t* is the time step. It is important to note that the *cooling function* used in Boltzmann Annealing (Eq. 3) decreases more slowly than the *cooling function* used in Fast Annealing (Eq. 5). This characteristic makes the convergence of Fast Annealing faster than that of Boltzmann Annealing.
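Cauchy state generation can be sketched the same way; the one-dimensional inverse-CDF trick *T*·tan(π(*u* − 1/2)) used below is a standard way to draw Cauchy-distributed steps and is an assumption here, not something specified by the chapter.

```python
import math
import random

def fast_cooling(t, t0=100.0):
    # Eq. 5: T(t) = T0 / t
    return t0 / t

def cauchy_step(x, temp, rnd=random):
    # Heavy-tailed Cauchy perturbation with scale T(t): even at low
    # temperature, occasional long jumps are still generated.
    return [xi + temp * math.tan(math.pi * (rnd.random() - 0.5)) for xi in x]

# At t = 1000 the fast schedule has cooled to 0.1, while the Boltzmann
# schedule of Eq. 3 would still be at 100 / log(1000), roughly 14.5.
```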

#### **2.3. Reannealing**

The reannealing method [14] raises the temperature periodically after the algorithm accepts a certain number of new states or after a given number of iterations. The search is then restarted with a new annealing temperature. The objective of reannealing is to avoid local minima, and it yields interesting results when applied to nonlinear optimization problems.

#### **2.4. Simulated Quenching**

Simulated Quenching (SQ) [1], described before, is useful when an approximate solution is sufficient and there is a need for faster execution. An example of a function that can be used to decrease the temperature faster is the exponential *cooling function* shown below.

$$T\_{\rm Exp}(t) = T\_0 \cdot 0.95^t \tag{6}$$

Using this *cooling function* with the Boltzmann *state generation function* (Eq. 2) or the Fast *state generation function* (Eq. 4) makes the optimization faster, but without any convergence guarantee.
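The three schedules seen so far can be compared side by side; the step count and *T*0 below are arbitrary choices for illustration.

```python
import math

T0 = 100.0
schedules = {
    "Boltzmann (Eq. 3)":   lambda t: T0 / math.log(t),
    "Fast (Eq. 5)":        lambda t: T0 / t,
    "Exponential (Eq. 6)": lambda t: T0 * 0.95 ** t,
}
temps = {name: f(200) for name, f in schedules.items()}
# At t = 200: Boltzmann ~ 18.9, Fast = 0.5, Exponential ~ 0.0035 --
# the quenching schedule reaches near-zero temperature far sooner.
```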

#### **3. Circuit modeling**

In order to design an analog integrated circuit with Simulated Annealing optimization, it is necessary to develop a cost function describing the analog circuit behavior. There are two ways to evaluate the circuit behavior. One is based on simplified equations, used as cost functions, that represent the circuit. This is the faster alternative, but it has low precision and restricts the solutions in some regions of circuit operation. The other is to use an external SPICE electrical simulator to evaluate the circuit with a complete model. This alternative provides better accuracy, but demands more computational power.

In this work the second alternative is used, with the electrical simulation performed by Synopsys HSPICE®. In the optimization procedure, the heuristic parameters are the MOSFET transistor sizes *W* (channel width) and *L* (channel length), the voltage and current source bias values, and the capacitor and resistor values. The design flow using Simulated Annealing proposed in this chapter is shown in Figure 1.

The proposed methodology has three specification structures as inputs:

• *Design constraints* that represent all functions of circuit specifications and variable bounds;
• A *technology file* containing simulation model parameters for the MOSFET transistors; and
• *SA Options* for the configuration of the SA heuristic, such as temperature function, annealing function and stop condition.


The methodology starts with the generation of an initial solution, whose values are randomly drawn within the variable bounds. The circuit specifications of the generated solution are then evaluated by the cost function, which uses the external SPICE electrical simulator. Thereafter, the SA temperature parameter is initialized with the value specified in the SA options.

Next, a new solution is generated by the SA state generation function (see Section 2) and evaluated by the cost function by means of electrical simulations. The new solution is compared with the current solution and, if it has a lower cost function value, it replaces the current solution. Otherwise, a random number is generated and compared with the acceptance probability (Eq. 1): if the number is smaller, the current solution is replaced by the new solution; if it is greater, the new solution is rejected.

Finally, the stopping conditions are verified and, if satisfied, the optimization process ends. If not satisfied, the temperature parameter is reduced by the cooling function and the procedure continues. The stopping conditions usually include a minimum value of temperature, a minimum cost function variation, and a maximum number of iterations.
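The loop described above can be sketched as follows. The SPICE call is stubbed out with a toy quadratic cost (a real implementation would write a netlist, invoke the simulator, and parse the measured specifications); the variable names, bounds, and schedule constants are all illustrative, not taken from the chapter.

```python
import math
import random

def evaluate_with_spice(solution):
    """Stand-in for the external SPICE simulation: here, a toy cost
    that is minimal when every parameter equals 1.0."""
    return sum((v - 1.0) ** 2 for v in solution.values())

def size_circuit(bounds, t0=5.0, t_min=1e-2, max_iter=2000):
    rnd = random.Random(42)
    # Initial solution: random values within the variable bounds.
    sol = {name: rnd.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
    cost = evaluate_with_spice(sol)
    for t in range(1, max_iter + 1):
        # New candidate from the state generation function (Gaussian here),
        # clamped back into the variable bounds.
        cand = {name: min(max(v + rnd.gauss(0.0, 0.1), bounds[name][0]),
                          bounds[name][1]) for name, v in sol.items()}
        delta = evaluate_with_spice(cand) - cost
        temp = t0 * 0.95 ** t                    # exponential cooling (Eq. 6)
        if delta < 0 or rnd.random() < 1.0 / (1.0 + math.exp(min(delta / temp, 50.0))):
            sol, cost = cand, cost + delta
        if temp < t_min:                         # stop: minimum temperature
            break
    return sol, cost

bounds = {"W1": (0.5, 100.0), "L1": (0.18, 10.0), "Ibias": (0.0, 2.0)}
sol, cost = size_circuit(bounds)
```

Only the minimum-temperature stopping condition is shown; the minimum cost variation and maximum iteration checks mentioned above would slot into the same place in the loop.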

For analog design automation, a multi-objective cost function is necessary to aggregate different - and sometimes conflicting - circuit specifications. A typical multi-objective cost function can be:

$$f\_c(X) = \sum\_{i=1}^{n} S\_i(X) + \sum\_{j=1}^{m} R\_j(X) \tag{7}$$

In this function, the first sum represents the optimization specifications (design objectives) and the second one the design constraints. *Si*(*X*) is the *i*-th circuit specification value and *Rj*(*X*) is the *j*-th constraint function. Both are functions of the vector *X* of design parameters and are normalized and tuned according to the desired circuit performance.

*Rj*(*X*) is a function that depends on the specification type: minimum required value (*Rmin*(*X*)) or maximum required value (*Rmax*(*X*)) [3]. These functions are shown in Fig. 2, where *a* is the minimum or maximum required value and *b* is the bound between acceptable and unacceptable performance values. Acceptable but non-feasible performance values are the points between *a* and *b*. For these points the constraint functions return intermediate values in order to allow the exploration of disconnected feasible design-space regions. The functions add cost to the cost function if the performance is outside the desired range; otherwise, the additional cost is zero.
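Assuming a linear ramp between *a* and *b* (the text only states that intermediate values are returned there, so the exact shape is an assumption), the two constraint functions of Fig. 2 can be sketched as:

```python
def r_min(s, a, b, f_max=1.0):
    """Penalty for a minimum-required-value specification.
    s: measured specification; a: required minimum; b < a separates
    acceptable from unacceptable. Shape and f_max are illustrative."""
    if s >= a:                           # feasible: no additional cost
        return 0.0
    if s <= b:                           # unacceptable: full penalty
        return f_max
    return f_max * (a - s) / (a - b)     # acceptable but non-feasible: ramp

def r_max(s, a, b, f_max=1.0):
    """Penalty for a maximum-required-value specification (b > a)."""
    if s <= a:
        return 0.0
    if s >= b:
        return f_max
    return f_max * (s - a) / (b - a)

# e.g. a voltage gain spec with required minimum a = 60 dB and
# unacceptable performance below b = 40 dB:
#   r_min(70, 60, 40) -> 0.0, r_min(50, 60, 40) -> 0.5, r_min(30, 60, 40) -> 1.0
```

The intermediate ramp is what lets the optimizer cross infeasible valleys: a hard 0/`f_max` step would give the search no gradient toward the feasible region.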

**Figure 2.** Cost function performance metrics: (a) minimum required value specifications and (b) maximum required value specifications.

The differential amplifier is used as the amplification stage of many electronic devices and has become the dominant choice in today's high-performance analog and mixed-signal circuits [19]. Ideally, it amplifies the difference between two voltages but does not amplify the common-mode voltages. An implementation of the differential amplifier with CMOS transistors and active load is shown in Fig. 3. It is composed of a differential pair formed by two input transistors (*M*1 and *M*2), an active current mirror (*M*3 and *M*4) and an ideal tail current source *Iref*. The output voltage *Vout* depends on the difference between the input voltages *Vin*1 and *Vin*2. For a small difference between *Vin*1 and *Vin*2, both *M*2 and *M*4 are saturated, providing a high gain. Otherwise, if |*Vin*1 − *Vin*2| is large enough, *M*1 or *M*2 will be off and the output will be stuck at 0 V or at *VDD*.

The output voltage of the differential amplifier can be expressed in terms of its differential-mode and common-mode input voltages as

$$V\_{out} = A\_{VD}(V\_{in1} - V\_{in2}) \pm A\_{VC}\frac{V\_{in1} + V\_{in2}}{2} \tag{8}$$

where *AVD* is the differential-mode voltage gain and *AVC* is the common-mode voltage gain. An ideal operational amplifier has an infinite *AVD* and zero *AVC*. Although practical implementations try to approximate these values, physical circuits insert some non-idealities that limit *AVD* and *AVC*.

Another important characteristic of a differential amplifier is the input common-mode range (*ICMR*). We can estimate the ICMR by setting *Vin*1 = *Vin*2 and varying the input common-mode voltage (the DC component of *Vin*1 and *Vin*2) until one of the transistors in the circuit is no longer saturated [2]. The highest common-mode input voltage (*ICMR*+) is

$$ICMR^{+} = V\_{DD} - V\_{SG3} + V\_{TN1} \tag{9}$$

Here, *VSG*3 is the source-gate voltage of transistor *M*3 and *VTN*1 is the threshold voltage of *M*1. The lowest input voltage at the gate of *M*1 (or *M*2) is found to be

$$ICMR^{-} = V\_{SS} + V\_{1} + V\_{GS2} \tag{10}$$

where *VGS*2 is the gate-source voltage of transistor *M*2. The voltage at node 1 (*V*1) is determined by the physical implementation of the current source *Iref*, which in general is a single transistor whose drain current is controlled by its gate voltage.

**Figure 1.** Analog integrated circuit sizing with Simulated Annealing flow.
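A small worked example of Eqs. 9 and 10; the voltage values below are illustrative and not taken from the chapter.

```python
# Illustrative operating values (volts):
VDD, VSS = 3.3, 0.0     # supply rails
VSG3 = 1.0              # source-gate voltage of M3
VTN1 = 0.5              # threshold voltage of M1
V1 = 0.3                # voltage at node 1, set by the tail current source
VGS2 = 0.9              # gate-source voltage of M2

icmr_plus = VDD - VSG3 + VTN1     # Eq. 9  -> 2.8 V
icmr_minus = VSS + V1 + VGS2      # Eq. 10 -> 1.2 V
# Input common-mode voltages between 1.2 V and 2.8 V keep all
# transistors saturated in this example.
```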
