**2. Theoretical background material**

#### **2.1. Mathematical model of an electronic circuit**

The dynamic behavior of an electronic circuit can be modeled by a system of differential algebraic equations (DAE) involving electric voltages, currents and charges, and magnetic fluxes. The DAE system can, in general, be formulated as

$$\frac{d\boldsymbol{q}(\mathbf{x}(t))}{dt} + \boldsymbol{f}(\mathbf{x}(t)) = \boldsymbol{b}(t),\tag{1}$$

in which *b(t)*∈*ℝ<sup>n</sup>* stands for the excitation vector (independent voltage or current sources) and *x(t)*∈*ℝ<sup>n</sup>* represents the state-variable vector (node voltages and branch currents). *f*(·) models the memoryless elements of the circuit, such as linear or nonlinear resistors and linear or nonlinear controlled sources. *q*(·) models the dynamic elements, such as capacitors or inductors, represented as voltage-dependent electric charges or current-dependent magnetic fluxes, respectively.

The system of (1) can be constructed from a circuit description using, for example, nodal analysis, which involves applying Kirchhoff's current law to each node in the circuit and applying the constitutive (branch) equations to each circuit element. Hence, it represents the general mathematical formulation of lumped problems. However, as reviewed in [1], this DAE circuit model formulation can also include linear distributed elements. For that, the distributed devices are substituted by their lumped-element equivalent circuit models, or are replaced, as whole sub-circuits, by reduced-order models derived from their frequency-domain characteristics. It must be noted that the substitution of distributed devices by lumped-equivalent models is especially reasonable when the size of the circuit elements is small in comparison to the wavelengths, as is the case in most emerging RF technologies integrating digital high-speed CMOS baseband processing and RFCMOS hardware in the same substrate.
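As a concrete illustration of (1), consider a minimal, invented example (not taken from the text): a single node where an independent current source drives a resistor and a capacitor in parallel. Kirchhoff's current law at that node yields a scalar instance of the DAE, with *q*(*v*)=*Cv*, *f*(*v*)=*v*/*R* and *b*(*t*)=*i<sub>s</sub>*(*t*); the element values and source below are arbitrary choices for the sketch.

```python
import numpy as np

# Hypothetical example: a current source i_s(t) driving a parallel RC network.
# Kirchhoff's current law at the single node gives
#     d q(v)/dt + f(v) = b(t),
# which is the scalar instance of the DAE system (1).

R = 50.0      # resistance [ohm] (arbitrary value)
C = 1e-12     # capacitance [F] (arbitrary value)

def q(v):
    """Dynamic element: charge stored in the linear capacitor."""
    return C * v

def f(v):
    """Memoryless element: current through the linear resistor."""
    return v / R

def b(t):
    """Excitation: a 1 mA, 1 GHz sinusoidal current source (arbitrary)."""
    return 1e-3 * np.sin(2 * np.pi * 1e9 * t)
```

For a full netlist, nodal analysis stacks one such equation per node, producing the vector-valued system (1).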

#### **2.2. Transient simulation**

Circuit simulation has been driven toward an increasingly challenging scenario of heterogeneous, broadband, and strongly nonlinear wireless communication circuits, presenting a wide variety of slowly varying and fast-changing state variables (node voltages and branch currents). Thus, RF and microwave design has been an important booster for numerical simulation and device modeling development.

In general, waveforms processed by wireless communication systems can be expressed as a high-frequency RF carrier modulated by some kind of slowly varying baseband aperiodic signal (the information signal). Therefore, the evaluation of any relevant information time window requires the computation of thousands or millions of time instants of the composite modulated signal, turning any conventional numerical time-step integration of the circuits' systems of differential algebraic equations highly inefficient. However, if the waveforms do not require too many harmonic components for a convenient frequency-domain representation, this category of circuits can be efficiently simulated with hybrid time-frequency techniques. Handling the response to the slowly varying baseband information signal on the conventional time-step-by-time-step basis, but representing the reaction to the periodic RF carrier as a small set of Fourier components (a harmonic balance algorithm computing the steady-state response to the carrier), hybrid time-frequency techniques are playing an important role in RF and microwave circuit simulation.

Beyond overcoming the signals' time-scale disparity, the partitioned time-frequency technique discussed in Section 3.2 is also able to efficiently simulate highly heterogeneous RF networks, by splitting the circuits into different subsets (blocks) and computing their state variables with distinct numerical schemes.



54 Modeling and Simulation in Engineering Sciences



Obtaining the solution of (1) over a specified time interval [*t*0, *tFinal*] with a specific initial condition *x*(*t*0)=*x*0 is what is usually known as an *initial value problem*, and evaluating such a solution is frequently referred to as *transient analysis*. The most natural way to compute *x*(*t*) is to numerically time-step integrate (1) directly in the time domain. So, it should be no surprise that this straightforward technique was used in the first digital computer programs for circuit analysis and is still widely used nowadays. It is present in all SPICE (simulation program with integrated circuit emphasis) or SPICE-like computer programs [2]. In order to numerically time-step integrate the DAE system of (1), commercial tools use initial value solvers, such as linear multistep methods (LMM) [3–5] or one-step methods, e.g., Runge-Kutta (RK) methods [3–5]. Both the LMM and RK families offer a great diversity of explicit and implicit (iterative) numerical schemes, suitable for computing the numerical solution of different types of initial value problems with a desired accuracy.
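A minimal sketch of such a time-step integration, using backward Euler (the simplest implicit LMM), is given below for the scalar DAE of a hypothetical parallel RC network driven by a sinusoidal current source; the circuit and all element values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Transient analysis by numerical time-step integration (backward Euler).
# The circuit is an invented parallel RC driven by a sinusoidal current
# source, so (1) reduces to the scalar DAE  d(C*v)/dt + v/R = i_s(t).

R, C = 50.0, 1e-12            # arbitrary element values

def b(t):
    return 1e-3 * np.sin(2 * np.pi * 1e9 * t)   # 1 mA, 1 GHz source

def transient(t0, t_final, v0, h):
    """Backward Euler: (q(v_k) - q(v_{k-1}))/h + f(v_k) = b(t_k).
    For this linear circuit each step reduces to a scalar linear solve."""
    ts = np.arange(t0, t_final + h, h)
    vs = np.empty_like(ts)
    vs[0] = v0
    for k in range(1, len(ts)):
        # (C/h + 1/R) * v_k = (C/h) * v_{k-1} + b(t_k)
        vs[k] = (C / h * vs[k - 1] + b(ts[k])) / (C / h + 1.0 / R)
    return ts, vs

ts, vs = transient(0.0, 5e-9, 0.0, 1e-12)   # 5 ns window, 1 ps steps
```

For a nonlinear circuit, the scalar solve inside the loop becomes a Newton iteration at each time step, which is exactly what SPICE-like tools do.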

#### **2.3. Steady-state simulation**

Although SPICE-like computer programs (which were initially conceived to compute the transient response of electronic circuits) are still widely used nowadays, RF and microwave designers' interest normally resides in the steady-state response. The reason is that some properties of the circuits are better described, or simply only defined, in steady state (e.g., harmonic or intermodulation distortion, noise, power, gain, impedance, etc.). Initial value solvers, such as linear multistep methods or Runge-Kutta methods, which were tailored for finding the circuit's transient response, are not adequate for computing the steady state because they have to pass through the lengthy process of integrating all transients and waiting for them to vanish.
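The cost of "waiting for the transients to vanish" can be made concrete with a small numerical experiment. The circuit below is an invented parallel RC network whose time constant *τ*=*RC* spans 50 excitation periods, so a time-step integrator must march through a few hundred periods before consecutive periods agree; all values are arbitrary.

```python
import numpy as np

# Hypothetical parallel RC (tau = R*C = 50 us) driven by a 1 MHz sinusoidal
# current source: backward Euler must integrate through hundreds of periods
# before the transient decays and the response becomes periodic.

R, C = 50e3, 1e-9                 # tau = 50 us = 50 excitation periods
f0 = 1e6
T = 1.0 / f0
h = T / 200                       # 200 backward Euler steps per period

def b(t):
    return 1e-3 * np.sin(2 * np.pi * f0 * t)

v = 0.0
prev_period = np.zeros(200)
n_periods = 0
while n_periods < 2000:
    period = np.empty(200)
    for m in range(200):
        t = (n_periods * 200 + m + 1) * h
        v = (C / h * v + b(t)) / (C / h + 1.0 / R)   # backward Euler step
        period[m] = v
    n_periods += 1
    # Stop once two consecutive periods coincide (steady state reached).
    if np.max(np.abs(period - prev_period)) < 1e-4:
        break
    prev_period = period
```

With these values, `n_periods` ends up in the hundreds: tens of thousands of time steps to obtain what a dedicated steady-state method computes directly.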

Computing the periodic steady-state response of an electronic circuit can be formulated as finding a starting condition (left boundary), *x*(*t*0), for the DAE system modeling the circuit that leads to a solution obeying the final condition (right boundary) *x*(*t*0)=*x*(*t*0+*T*), with *T* being the period. In mathematics, these problems are usually known as *periodic boundary value problems*. Taking into account the formulation of (1), these problems will have here the following form,

$$\frac{d\boldsymbol{q}(\mathbf{x}(t))}{dt} + \boldsymbol{f}(\mathbf{x}(t)) = \boldsymbol{b}(t), \quad \mathbf{x}(t_0) = \mathbf{x}(t_0 + T), \quad t_0 \le t \le t_0 + T,\tag{2}$$

where the condition *x(t*0*)*=*x(t*0+*T)* is known as the *periodic boundary condition*.

In order to numerically solve (2), a solution that simultaneously satisfies the differential system and the two-point periodic boundary condition has to be computed. A particular technique has been found especially useful for RF circuit simulation: the *harmonic balance* (HB) [6–8] method.

#### **2.4. Harmonic balance**

Harmonic balance (HB) [6–8] handles the circuit, its excitation and its state variables in the frequency domain. Because of that, it benefits from allowing the direct inclusion of distributed devices (like dispersive transmission lines), or other circuit elements described by frequency-domain measurement data. Frequency-domain methods differ from time-domain steady-state techniques in that, instead of representing waveforms as collections of time samples, they represent them using coefficients of sinusoids in trigonometric series. As a consequence, under moderate nonlinearities, the steady-state solution is typically achieved much more easily in the frequency domain than in the time domain.

In periodic steady-state, any stimulus *bs*(*t*) or state variable *xv*(*t*) can be expressed as a Fourier series

$$b_s(t) = \sum_{k=-K}^{+K} B_k e^{jk\omega_0 t}, \qquad x_v(t) = \sum_{k=-K}^{+K} X_k e^{jk\omega_0 t},\tag{3}$$

where *ω*<sub>0</sub>=2*π*/*T* is the fundamental frequency and *K* is the order adopted for a convenient harmonic truncation. The HB method consists of converting the DAE system of (2) into the frequency domain, to obtain the following system of *n*(2*K*+1) algebraic equations

$$j\Omega\mathbf{Q}^{[r]} + \mathbf{F}^{[r]} + j\Omega\mathbf{C}^{[r]}\left[\mathbf{X}^{[r+1]} - \mathbf{X}^{[r]}\right] + \mathbf{G}^{[r]}\left[\mathbf{X}^{[r+1]} - \mathbf{X}^{[r]}\right] = \mathbf{B},\tag{4}$$

in which the unknowns are the Fourier coefficients of the state variables of the circuit

$$\mathbf{X} = \left[\mathbf{X}_1^T, \mathbf{X}_2^T, \dots, \mathbf{X}_n^T\right]^T,\tag{5}$$

where each of the **X***v*, *v*=1, …, *n*, is a (2*K*+1)×1 vector containing the Fourier coefficients of the corresponding state variable *xv*(*t*). **F** and **Q** are vectors containing the Fourier coefficients of *f*(*x*(*t*)) and *q*(*x*(*t*)), respectively, and **G** and **C** denote the *n*(2*K*+1)×*n*(2*K*+1) conversion (Toeplitz) matrices [7,9] corresponding to *g*(*x*)=*df*(*x*)/*dx* and *c*(*x*)=*dq*(*x*)/*dx*. *j* is the imaginary unit and Ω is a diagonal matrix defined as

$$\Omega = \operatorname{diag}\left(\underbrace{-K\omega_0, \dots, K\omega_0}_{v=1}, \underbrace{-K\omega_0, \dots, K\omega_0}_{v=2}, \dots, \underbrace{-K\omega_0, \dots, K\omega_0}_{v=n}\right).\tag{6}$$

The system of (4) can be rewritten as

$$\underbrace{j\Omega\mathbf{Q}^{[r]} + \mathbf{F}^{[r]} - \mathbf{B}}_{\mathbf{H}(\mathbf{X}^{[r]})} + \underbrace{\left[j\Omega\mathbf{C}^{[r]} + \mathbf{G}^{[r]}\right]}_{\mathbf{J}(\mathbf{X}^{[r]})}\left[\mathbf{X}^{[r+1]} - \mathbf{X}^{[r]}\right] = \mathbf{0},\tag{7}$$

or, in its simplified form, as

$$\mathbf{H}(\mathbf{X}^{[r]}) + \left.\frac{d\mathbf{H}(\mathbf{X})}{d\mathbf{X}}\right|_{\mathbf{X}=\mathbf{X}^{[r]}}\left[\mathbf{X}^{[r+1]} - \mathbf{X}^{[r]}\right] = \mathbf{0},\tag{8}$$

in which


$$\mathbf{H(X)} = j\Omega \mathbf{Q(X)} + \mathbf{F(X)} - \mathbf{B} = 0 \tag{9}$$

is known as the *harmonic balance system*, and the *n*(2*K* + 1)×*n*(2*K* + 1) composite conversion matrix

$$\mathbf{J(X)} = \frac{d\mathbf{H(X)}}{d\mathbf{X}} = j\Omega \mathbf{C(X)} + \mathbf{G(X)}\tag{10}$$

is known as the *Jacobian matrix* of the error function **H**(**X**). The system of (9) is iteratively solved according to (8), until a sufficiently accurate solution **X**[*f*] is achieved, i.e., until

$$\left\|\mathbf{H}(\mathbf{X}^{[f]})\right\| = \left\|j\Omega\mathbf{Q}(\mathbf{X}^{[f]}) + \mathbf{F}(\mathbf{X}^{[f]}) - \mathbf{B}\right\| < \delta,\tag{11}$$

where *δ* is the allowed residual size, and ‖**H**(•)‖ stands for some norm of the error function **H**(•).
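The Newton iteration (7)–(11) can be sketched numerically. The circuit below is an invented single-node example (a linear capacitor *C* in parallel with a nonlinear conductance *f*(*v*)=*v*/*R*+*a·v*³, driven by a sinusoidal current source); all element values are arbitrary assumptions. The DFT and its inverse play the role of the time/frequency conversions, and the conversion matrix **G** is built explicitly for clarity, rather than exploiting its Toeplitz structure as production HB solvers do.

```python
import numpy as np

# Minimal harmonic balance sketch (invented single-node circuit):
#   H(X) = j*Omega*Q(X) + F(X) - B = 0,  solved by Newton as in (8).
R, C, a, A = 50.0, 1e-12, 1e-3, 1e-3   # arbitrary element/source values
f0 = 1e9
w0 = 2 * np.pi * f0
K = 7
N = 2 * K + 1                          # 2K+1 Fourier coefficients

k = np.fft.fftfreq(N, 1.0 / N)         # signed harmonic indices -K..K
Omega = np.diag(k * w0)                # frequency-domain differentiation

def to_time(X):                        # coefficients -> N time samples
    return np.real(np.fft.ifft(X) * N)

def to_freq(x):                        # time samples -> coefficients
    return np.fft.fft(x) / N

t = np.arange(N) / (N * f0)
B = to_freq(A * np.cos(w0 * t))        # excitation spectrum

X = np.zeros(N, dtype=complex)         # initial guess
for _ in range(50):
    v = to_time(X)
    H = 1j * Omega @ (C * X) + to_freq(v / R + a * v**3) - B
    if np.linalg.norm(H) < 1e-12:      # convergence test (11)
        break
    # Jacobian J = j*Omega*C + G, with the conversion matrix G built as
    # DFT * diag(g(v)) * IDFT, where g(v) = df/dv = 1/R + 3*a*v^2.
    Fm = np.fft.fft(np.eye(N), axis=0) / N     # DFT matrix
    Fi = np.fft.ifft(np.eye(N), axis=0) * N    # inverse DFT matrix
    G = Fm @ np.diag(1.0 / R + 3 * a * v**2) @ Fi
    J = 1j * Omega * C + G
    X = X - np.linalg.solve(J, H)              # Newton update (8)
```

For this weakly nonlinear example the iteration converges in a few steps, and the fundamental coefficient approaches the linear-circuit value (*A*/2)/(1/*R*+*jω*<sub>0</sub>*C*).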

#### **2.5. Multivariate formulation**

The multivariate formulation is a powerful strategy that emerged in the late 1990s, playing an important role in RF circuit simulation today. It was first introduced by Brachtendorf et al. [10] as a sophisticated derivation of quasi-periodic harmonic balance, followed by Roychowdhury [11], who demonstrated that the multivariate formulation can be an efficient strategy to analyze circuits running on distinct time scales. The multivariate formulation uses multiple time variables (artificial time scales) to describe the multirate behavior of the circuits. Thus, it is suitable to describe typical multirate regimes of operation present in RF and microwave systems, as is the case of circuits handling amplitude and/or phase-modulated signals, quasi-periodic signals, or any other kind of multirate signals containing a periodic component.

The main achievements of the multivariate formulation are due to the fact that multirate signals can be represented much more efficiently if they are defined as functions of two or more time variables (artificial time scales), i.e., if they are defined as multivariate functions [11–16]. Therefore, as we will see in Section 2.5.2, circuits will no longer be described by ordinary differential algebraic equations in the one-dimensional time *t*, but, instead, by partial differential algebraic systems.

#### *2.5.1. Multivariate representations*

The multivariate (multidimensional) strategy is easily illustrated by applying it to a bidimensional problem (two distinct time scales). So, let us consider, for example, an amplitude-modulated RF carrier of the form

$$b(t) = e(t)\cos(2\pi f_C t),\tag{12}$$

where *e*(*t*) is the envelope (a slowly varying signal), while cos(2*πfCt*) is the fast-varying RF carrier. Simulating a circuit with this kind of stimulus using conventional time-step integration schemes (e.g., Runge-Kutta schemes or linear multistep methods) is computationally very expensive. The main reason is that the solution has to be computed over a long time interval imposed by the slowly varying envelope, whereas the step length is severely constrained by the high-frequency RF carrier.

Consider now the following bidimensional definition for *b*(*t*),

$$\hat{b}(t_E, t_C) = e(t_E)\cos(2\pi f_C t_C),\tag{13}$$

where *tE* is the slow envelope time scale and *tC* is the fast carrier time scale. In this particular case, *b̂*(*tE*, *tC*) is a periodic function with respect to *tC* but not to *tE*, i.e.,

$$\hat{b}(t_E, t_C) = \hat{b}(t_E, t_C + T_C), \quad T_C = 1/f_C.\tag{14}$$

The univariate form, *b*(*t*), with *e*(*t*)=2*e*<sup>−80×10<sup>6</sup>*t*</sup> and *fC*=2 GHz, is plotted in **Figure 1** for the [0, 25 ns] time interval. The corresponding bivariate form, *b̂*(*tE*, *tC*), is depicted in **Figure 2** for the rectangular region [0, 25 ns] × [0, 0.5 ns]. There, it can be appreciated that *b̂*(*tE*, *tC*) does not have as many undulations as *b*(*t*), thus allowing a more compact representation with fewer samples. Furthermore, due to the periodicity of *b̂*(*tE*, *tC*) in *tC*, we know that its plot repeats over the rest of this time axis. Thus, the bivariate form plotted in **Figure 2** contains all the information necessary to recover the original univariate form depicted in **Figure 1**.

**Figure 1.** Envelope-modulated signal in the univariate time.


**Figure 2.** Bivariate representation of the envelope-modulated signal.
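The sample-count advantage of the bivariate form can be quantified for the signal of (12)–(13), using the same envelope *e*(*t*)=2*e*<sup>−80×10<sup>6</sup>*t*</sup> and carrier *fC*=2 GHz as Figures 1 and 2; the points-per-period and grid densities below are illustrative choices, not from the text.

```python
import numpy as np

# Sample-count comparison between the univariate form (12) and the
# bivariate form (13) of the envelope-modulated test signal.
fC = 2e9
TC = 1.0 / fC                     # 0.5 ns carrier period
t_final = 25e-9                   # 25 ns observation window

def e(t):
    return 2.0 * np.exp(-80e6 * t)

def b(t):                         # univariate form (12)
    return e(t) * np.cos(2 * np.pi * fC * t)

def b_hat(tE, tC):                # bivariate form (13)
    return e(tE) * np.cos(2 * np.pi * fC * tC)

# Univariate sampling: ~20 points per carrier period over all 50 periods.
n_uni = round(t_final / TC) * 20                  # 1000 samples

# Bivariate sampling: a coarse envelope grid (25 points over 25 ns) times a
# fine grid over a SINGLE carrier period; the advantage grows with t_final.
n_biv = 25 * 20                                   # 500 samples

# Periodicity (14) in t_C: the surface repeats along the carrier time axis.
tE = np.linspace(0.0, t_final, 26)
tC = np.linspace(0.0, TC, 21)
assert np.allclose(b_hat(tE[:, None], tC[None, :]),
                   b_hat(tE[:, None], tC[None, :] + TC))
```

For this short 25 ns window the saving is only a factor of two, but for a realistic information window spanning microseconds the univariate count grows with the number of carrier periods while the bivariate count does not.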

#### *2.5.2. Multirate partial differential algebraic equations' systems*

Let us consider a general nonlinear RF circuit described by the differential algebraic equations' system of (1), and let us suppose that this circuit is driven by the envelope-modulated signal of (12). Considering the above, we can reformulate the excitation vector *b*(*t*) and the state-variable vector *x*(*t*) as bidimensional entities, in which *t* is replaced by *tE* for the slowly varying parts (the envelope time scale) and by *tC* for the fast-varying parts (the RF carrier time scale). This bidimensional formulation converts the DAE system of (1) into the following *multirate partial differential algebraic equations'* (MPDAE) system [11]:

$$\frac{\partial \mathbf{q}(\hat{\mathbf{x}}(t_E, t_C))}{\partial t_E} + \frac{\partial \mathbf{q}(\hat{\mathbf{x}}(t_E, t_C))}{\partial t_C} + \mathbf{f}(\hat{\mathbf{x}}(t_E, t_C)) = \hat{\mathbf{b}}(t_E, t_C).\tag{15}$$

The mathematical relation between (1) and (15) establishes that if *b̂*(*tE*, *tC*) and *x̂*(*tE*, *tC*) satisfy (15), then the univariate forms *b*(*t*)=*b̂*(*t*, *t*) and *x*(*t*)=*x̂*(*t*, *t*) satisfy (1) [11]. Therefore, univariate solutions of (1) are available on the diagonal lines *tE*=*t*, *tC*=*t* of the bivariate solutions of (15), i.e., *x*(*t*) may be retrieved from its bivariate form *x̂*(*tE*, *tC*) by simply setting *tE*=*tC*=*t*. Consequently, if one wants to obtain the univariate solution in the generic [0, *tFinal*] interval, due to the periodicity of the problem in the *tC* dimension we will have

$$\mathbf{x}(t) = \hat{\mathbf{x}}(t, t \,\mathrm{mod}\, T_C)\tag{16}$$

on the rectangular domain [0, *tFinal*] × [0, *TC*], where *t* mod *TC* represents the remainder of the division of *t* by *TC*. The main advantage of this MPDAE approach is that it can result in significant improvements in simulation speed when compared to DAE-based alternatives.
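The recovery rule (16) can be checked numerically on the envelope-modulated test signal of (12)–(13), using the bivariate form in place of a computed state variable; the window and grid below are arbitrary choices.

```python
import numpy as np

# Recovering the univariate waveform from its bivariate form via (16):
#   x(t) = x_hat(t, t mod T_C),
# demonstrated here with the known test signal b(t) of (12) and (13).
fC = 2e9
TC = 1.0 / fC

def b_hat(tE, tC):                        # bivariate form (13)
    return 2.0 * np.exp(-80e6 * tE) * np.cos(2 * np.pi * fC * tC)

def b(t):                                 # original univariate signal (12)
    return 2.0 * np.exp(-80e6 * t) * np.cos(2 * np.pi * fC * t)

t = np.linspace(0.0, 25e-9, 2001)
# Evaluate the bivariate form only on [0, t_final] x [0, T_C]:
recovered = b_hat(t, np.mod(t, TC))
# 'recovered' now matches b(t) over the whole 25 ns window.
```

The same one-line evaluation applies to the bivariate state variables *x̂*(*tE*, *tC*) once (17) has been solved on the rectangle.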

#### *2.5.3. Initial and boundary conditions for envelope-modulated regimes*

The dynamic behavior of RF circuits driven by stimuli of the form of (12) can be described by the MPDAE system of (15), together with a set of initial and periodic boundary conditions. In fact, the bivariate forms of the circuits' state variables can be obtained by computing the solution of the following initial periodic-boundary value problem

$$\begin{aligned}\frac{\partial \mathbf{q}(\hat{\mathbf{x}}(t_E, t_C))}{\partial t_E} + \frac{\partial \mathbf{q}(\hat{\mathbf{x}}(t_E, t_C))}{\partial t_C} + \mathbf{f}(\hat{\mathbf{x}}(t_E, t_C)) &= \hat{\mathbf{b}}(t_E, t_C),\\ \hat{\mathbf{x}}(0, t_C) = \mathbf{i}(t_C), \qquad \hat{\mathbf{x}}(t_E, 0) &= \hat{\mathbf{x}}(t_E, T_C),\end{aligned}\tag{17}$$

on the rectangle [0, *tFinal*] × [0, *TC*]. *i*(·) is a given initial-condition function defined on [0, *TC*], satisfying *i*(0)=*i*(*TC*)=*x*(0), and *x̂*(*tE*, 0)=*x̂*(*tE*, *TC*) is the periodic boundary condition due to the periodicity of the problem in the fast carrier time scale *tC*.

The reason why bivariate envelope-modulated solutions do not need to be evaluated on the entire [0, *tFinal*] × [0, *tFinal*] domain (which would be computationally very expensive and would render the multivariate strategy useless), and are instead restricted to the rectangle [0, *tFinal*] × [0, *TC*], is that the solutions repeat along the *tC* time axis. The way univariate solutions are recovered from their multivariate forms was already defined above by (16).
