

**Chapter 4**

#### **Design of Controller for Brushless Direct Current Motors Using FPGA**

Suneeta Harlapur

DOI: 10.5772/intechopen.79873

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.79873

#### **Abstract**

Brushless direct current (BLDC) motors are a mainstay of modern motor drives. This chapter presents some of the central ideas underlying the design of an FPGA-based BLDC motor controller. It covers a considerable amount of ground, but at a fairly basic level, so as to make the central ideas clear. The chapter gives a practical methodology to aid the design and control of cost-effective, efficient BLDC motors. Speed control of a BLDC motor using PIC microcontrollers requires additional hardware; the availability of flexible FPGA features motivated the development of a cost-effective and reliable controller with a variable speed range. In this chapter, an algorithm that uses the resolver signals captured from the motor is developed with the help of a Resolver to Digital Converter. The VHDL program produces the firing pulses required to drive, through gate drivers, the MOSFETs of a three-phase fully controlled bridge converter. The presented design procedure is found to be effective and efficient.

**Keywords:** BLDC, FPGA, MOSFET, RDC, VHDL

### **1. Introduction**

Brushless direct current (BLDC) motor controllers have received considerable attention in the past few years. The desirable features of brushed DC torque motors, such as their torque-speed characteristics and accurate speed control, are maintained in the BLDC motor approach, while the problems posed by brushed DC motors, such as arcing (which causes high EMI) and frequent replacement of brushes and commutators, have been eliminated or minimized.

> © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **1.1. History of brushless DC motor**

The earliest brushless DC motor appeared in 1962, when Wilson and Trickey produced a "DC Machine with Solid State Commutation". It was subsequently developed as a peak-torque, fast-response drive for specialty applications, for example, tape and disk drives for computers, robotics and positioning systems, and aircraft, where brush wear was unacceptable because of low humidity. Together with the emergence of comparably capable high-energy permanent magnet materials and of high-voltage transistors in the early to mid-1980s, the ability to produce such a motor affordably became a reality.


The first large brushless DC motors, rated at 50 hp, were designed at POWERTEC Industrial Corporation in the late 1980s by Robert E. Lordo. Today, the majority of the major motor manufacturers make brushless DC motors. Brushless DC drives work on the same principle as all DC motors, but the motor is built "inside out": the field magnets are mounted on the shaft of the motor and the "armature" windings surround them. The field rotates and the effective "armature" stays stationary.

In order to replicate the action of the commutator, an encoder was mounted on the shaft of the motor to sense the position of the field magnets on the shaft. The controller reads this magnetic position information and determines, through simple logic, which motor lead should carry current into a winding and which motor lead should return the current from the winding.

### **1.2. Construction and operation of the BLDC motor**

A BLDC motor consists of a stator made of laminated steel stacked up to carry the windings. Brushless motors are for the most part controlled using a three-phase power semiconductor bridge. In many motors, the basic number of coils is replicated to obtain finer rotation steps and smaller torque ripple.

A BLDC motor configured in a star pattern with three coils is considered here. The rotor in a typical BLDC motor is made of permanent magnets. Increasing the number of poles does give better torque, at the cost of reduced maximum possible speed [1, 2]. The motor requires a rotor position sensor for starting and for providing the proper commutation sequence to turn on the power devices in the inverter bridge. Based on the rotor position, the power devices are commutated sequentially every 60°.

The commutation sequence for a BLDC motor involves three windings: the first is energized to positive power (current enters the winding), the second winding is energized to negative power (current exits the winding), and the third is in a non-energized condition. The interaction between the magnetic field produced by the stator coils and the permanent magnets creates the required torque.

The BLDC motor drive system consists of a DC power supply switched onto the stator phase windings of the motor through an inverter built from power switching devices. The detection of the rotor position determines the switching sequence of the inverter. Three-phase inverters are generally used to control these motors, requiring a rotor position sensor for starting and for providing the correct commutation sequence to the stator windings. These position sensors can be Hall sensors, resolvers, or absolute-position optical encoders. Although sensorless BLDC motor control using back-EMFs is feasible, it has some disadvantages; even so, sensorless control of BLDC motors has been receiving great interest.

#### **1.3. Electronic commutation**


In order to make the motor rotate, the coils are energized in a predefined sequence, making the motor turn in one direction. Running the sequence in the reverse order makes the motor run the other way. The direction of the current determines the orientation of the magnetic field created by the coil.

The magnetic field attracts and repels the permanent magnet rotor. By changing the current flow in the coils, and thereby the polarity of the magnetic fields, at the right moment and in the right sequence, the motor turns. Switching the current through the stator coils is referred to as 'commutation'.

A three-phase BLDC motor has six steps of commutation. In six-step commutation, only two out of the three BLDC motor windings are used at a time, as shown in **Figure 1** using a three-phase half-bridge inverter arrangement.

**Figure 1.** Six stages of commutation.

Each step is equivalent to 60 electrical degrees, so six steps make a full 360° rotation. When each of the six states in the commutation sequence has been executed, the sequence is repeated to continue the rotation of the motor. This sequence represents one full electrical revolution. For motors with multiple pole pairs, an electrical revolution does not correspond to a mechanical revolution.
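The electrical-versus-mechanical distinction above can be made concrete with a small sketch (illustrative Python standing in for the chapter's VHDL; the helper names are hypothetical): a motor with `pole_pairs` pole pairs completes `pole_pairs` electrical revolutions per mechanical revolution, and each 60° electrical span selects one of the six commutation sectors.

```python
def mechanical_to_electrical(theta_mech_deg: float, pole_pairs: int) -> float:
    """Map a mechanical shaft angle to its electrical angle in [0, 360)."""
    return (theta_mech_deg * pole_pairs) % 360.0

def sector(theta_elec_deg: float) -> int:
    """Return the six-step commutation sector (0..5), one per 60 electrical degrees."""
    return int(theta_elec_deg // 60) % 6

# A 6-pole motor has 3 pole pairs: 120 mechanical degrees already
# completes one full electrical revolution.
assert mechanical_to_electrical(120.0, 3) == 0.0
assert sector(mechanical_to_electrical(50.0, 3)) == 2  # 150 electrical degrees
```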

In a BLDC motor, the commutation is achieved using feedback sensors. Hall Effect Sensors, Resolvers and Optical encoders are commonly used feedback sensors. In this research work, a resolver fitted to the motor shaft has been used as the feedback device, whose two signals are converted to a precise shaft position, using a resolver to digital converter (AD2S83) with a resolution of 12-bits.

#### *1.3.1. Three phase inverter*

The BLDC motor control consists of generating DC currents in the motor phases. This control is subdivided into two independent operations: first, stator and rotor flux synchronization, and then control of the current magnitude. Both operations are realized through the three-phase inverter described in the following scheme. The flux synchronization is derived from the position information coming from the resolver. From the position, the controller determines the proper pair of MOSFETs that must be driven. The steering of the current to a fixed 60° reference can be realized as shown in **Table 1** and the circuit shown in **Figure 2**, respectively.

#### *1.3.2. Resolvers*

Resolvers are transducers that convert the angular position and/or angular velocity of a rotating shaft to an electrical signal. They deliver signals proportional to the sine and cosine of the shaft angle. When the rotor is excited with a reference voltage of the form A sin(ωt), the voltages induced across the two stator windings are of the form:

$$S_1 - S_2 = A \sin(\omega t) \sin(\theta) \tag{1}$$

$$S_2 - S_4 = A \sin(\omega t) \cos(\theta) \tag{2}$$

where 'θ' is the shaft angle of the rotor. The two resolver signal outputs form the input to a Resolver to Digital Converter (RDC), which digitizes the shaft angle information into a digital format, for further processing by the FPGA for the electronic commutation.

| Sector (degrees) | Coil excitation | MOSFETs ON |
|------------------|-----------------|------------|
| 0–60°            | W-V             | T5, T4     |
| 60–120°          | W-U             | T5, T2     |
| 120–180°         | V-U             | T3, T2     |
| 180–240°         | V-W             | T3, T6     |
| 240–300°         | U-W             | T1, T6     |
| 300–360°         | U-V             | T1, T4     |

**Table 1.** Sector degree versus coil excitation.
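The mapping in Table 1 can be captured directly as a lookup from the 60° sector to the energized coil pair and the MOSFETs switched on. The sketch below is illustrative Python (standing in for the chapter's VHDL) and assumes sector 0 covers 0–60°, sector 1 covers 60–120°, and so on.

```python
# Six-step commutation lookup, transcribed from Table 1.
# Each entry: (coil excitation, MOSFETs driven on).
COMMUTATION = [
    ("W-V", ("T5", "T4")),  # 0-60 degrees
    ("W-U", ("T5", "T2")),  # 60-120
    ("V-U", ("T3", "T2")),  # 120-180
    ("V-W", ("T3", "T6")),  # 180-240
    ("U-W", ("T1", "T6")),  # 240-300
    ("U-V", ("T1", "T4")),  # 300-360
]

def drive_for_angle(theta_deg: float):
    """Return (coils, mosfets) for an electrical shaft angle in degrees."""
    return COMMUTATION[int(theta_deg % 360.0) // 60]

assert drive_for_angle(90.0) == ("W-U", ("T5", "T2"))
assert drive_for_angle(359.0) == ("U-V", ("T1", "T4"))
```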

#### *1.3.3. Resolver to digital converter*

The resolver mounted on the motor shaft works on the transformer principle. The primary winding is on the resolver's rotor and, depending on its shaft angle, the voltages induced in the two secondary windings are shifted by 90°. The position information is obtained in a digital format using an Analog Devices Resolver to Digital Converter (RDC) [3]. The RDC also provides a velocity signal in analog form, with a scale factor of 32.5 rps/V DC.
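The RDC's conversion can be mimicked in software: given the demodulated sine and cosine amplitudes from Eqs. (1) and (2), the shaft angle follows from the two-argument arctangent, quantized here to the 12-bit resolution the chapter uses. This is an illustrative sketch only; the real conversion is performed in the RDC hardware.

```python
import math

def resolver_to_digital(sin_amp: float, cos_amp: float, bits: int = 12) -> int:
    """Recover the shaft angle from demodulated resolver amplitudes and
    quantize it to an unsigned 'bits'-wide code, as a 12-bit RDC would."""
    theta = math.atan2(sin_amp, cos_amp) % (2 * math.pi)  # angle in [0, 2*pi)
    counts = 1 << bits                                    # 4096 codes for 12 bits
    return round(theta / (2 * math.pi) * counts) % counts

# 90 degrees -> one quarter of the full-scale code.
assert resolver_to_digital(1.0, 0.0) == 1024
# 0 degrees -> code 0.
assert resolver_to_digital(0.0, 1.0) == 0
```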



#### *1.3.4. Field programmable gate array (FPGA)*

The Spartan-3 FPGA [4], with advanced process technology, delivers the functionality needed for the BLDC motor controller. The Spartan-3 family is a superior alternative to mask-programmed ASICs and avoids the high initial cost, lengthy development cycles, and inherent inflexibility of conventional ASICs. FPGA programmability permits modifications in the field without disturbing the hardware setup.

The Spartan-3 XC3S400 device consists of 896 Configurable Logic Blocks (CLBs) containing RAM-based Look-Up Tables (LUTs) to implement logic and storage elements, so there is no need for external memory. Input/Output Blocks (IOBs) control the flow of data between the 116 I/O pairs. Digital Clock Manager (DCM) blocks provide self-calibrating, fully digital solutions for distributing, delaying, multiplying, dividing, and phase-shifting clock signals.

**Figure 2.** Three phase inverter and stator coil excitation.

### **1.4. Implementation of BLDC motor controller on FPGA**

The Spartan-3 XC3S400 FPGA is a very good alternative to mask-programmed ASICs and avoids the high cost and lengthy development process of a BLDC motor controller.

### *1.4.1. Implementation of open loop BLDC motor controller on FPGA*

The FPGA works as a controller that reads the information from the resolver-to-digital converter and performs the appropriate electronic commutation. The controller maintains a constant speed by controlling the width of the PWM signal. The scheme of the FPGA's role in the open-loop BLDC motor controller is shown in **Figure 3**.
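In the open-loop case, the FPGA's PWM generation amounts to a free-running counter compared against a programmed width register. A software model of that counter (illustrative Python standing in for the chapter's VHDL; names and the 8-bit period are assumptions) looks like this:

```python
def pwm_samples(width: int, period: int = 256, cycles: int = 1):
    """Model a counter-based PWM generator: the output is high while the
    free-running counter is below the programmed width register."""
    out = []
    for _ in range(cycles):
        for count in range(period):
            out.append(1 if count < width else 0)
    return out

wave = pwm_samples(width=64, period=256)
assert sum(wave) == 64                    # 64/256 = 25% duty cycle
assert wave[0] == 1 and wave[255] == 0    # high at start, low at end of period
```

Widening `width` raises the average stator voltage, which is exactly the knob the constant-speed controller turns.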

### *1.4.2. Implementation of closed loop BLDC motor controller on FPGA*

The FPGA forms a controller that reads in the resolver-to-digital converter output and performs the electronic commutation, reading the servo error from the analog-to-digital converter.

The speed control function is implemented by controlling the width of the PWM gated pulses. This scheme of the FPGA's role in the BLDC motor speed controller is shown in **Figure 4**.

### *1.4.2.1. Speed controller of BLDC motor*

Variable-speed control of a BLDC motor is achieved by using an inverter output with a variable frequency and variable voltage. The speed of the motor is related to the number of poles and the frequency of the supplied voltage as below:

$$N = 120 \text{f/P} \tag{3}$$


where N—speed in rpm, P—number of poles and f—frequency of the supply.

**Figure 5.** Block diagram of speed control of a BLDC motor.


**Figure 3.** FPGA as open loop BLDC motor controller.

**Figure 4.** FPGA as closed loop BLDC motor controller.


The selected BLDC motor for this work has six numbers of poles and tested for 1000 rpm speed with a 50 Hz power supply. The period of this supply is 20 ms and the duration of each

avoids the high cost and lengthy development process of BLDC motor controller.

stage of the six step commutation is 3.33 ms. Thus, the set frequency is configured according to the rpm required.
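The figures quoted in the text follow directly from Eq. (3). As a quick numerical check (plain Python, purely illustrative — the real design computes this in FPGA logic), the supply frequency and commutation step duration for the six-pole, 1000 rpm case can be derived as:

```python
def supply_frequency(rpm, poles):
    """Electrical supply frequency from N = 120 f / P, i.e. f = N * P / 120."""
    return rpm * poles / 120.0

def commutation_stage_ms(rpm, poles):
    """Duration of one of the six commutation steps, in milliseconds."""
    f = supply_frequency(rpm, poles)
    period_ms = 1000.0 / f   # electrical period in ms
    return period_ms / 6.0   # six-step commutation: six 60-degree sectors

# Six-pole motor tested at 1000 rpm, as in the text:
print(supply_frequency(1000, 6))               # 50.0 Hz -> 20 ms period
print(round(commutation_stage_ms(1000, 6), 2))  # 3.33 ms per step
```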

The variable voltage is obtained using the PWM technique by modifying the width of the pulses. This variable voltage sends a variable current to the stator coils based on the required torque of the load.

In this work, a hybrid approach has been selected for the BLDC motor speed controller. The speed and current loops have been implemented using operational amplifiers. The digitized error is read by the FPGA to compute the pulse width of the waveform to be sent to the gate control of the MOSFETs. A closed loop speed controller requires a reference speed to follow. The motor speed is fed back to determine the error between the reference speed and the motor speed. This error in speed is amplified and fed to a current loop, where the actual motor current, measured with a LEM sensor, is compared to determine the torque error. This error is amplified and fed to an 8-bit analog-to-digital converter (model ADC0800). The digitized error is fed to the FPGA to determine the PWM width so as to control the voltage and current supplied to the stator coils. This speed-controller scheme is shown in **Figure 5**.
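For illustration only — in this design the control loops themselves are analog, built with op-amps — the final digital step, mapping the 8-bit ADC error code to the compare value of a PWM timer, might be sketched as follows. The proportional gain and timer period here are assumed values, not figures from the design:

```python
def error_to_duty(adc_code, gain=1.0 / 255):
    """Map an 8-bit ADC error code (0..255) to a PWM duty cycle in [0, 1].

    `gain` is an assumed proportional scaling, not a value from the design.
    """
    adc_code = max(0, min(255, adc_code))  # clamp to the ADC range
    return min(adc_code * gain, 1.0)

def duty_to_compare(duty, pwm_period_counts=1000):
    """Convert a duty cycle to the compare value loaded into the PWM timer."""
    return int(duty * pwm_period_counts)

print(duty_to_compare(error_to_duty(128)))  # mid-scale error -> roughly half duty
```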

The generated gated signals are passed on through opto-isolators to the MOSFET gate drivers. The three-phase full bridge circuit drives the motor. This BLDC motor has a resolver mounted on its rotor shaft to provide its angular position. This motor delivers a rated torque of 0.41 Nm at the rated speed of 7000 rpm.

**Figure 5.** Block diagram of speed control of a BLDC motor.

The gate pulses generated by the FPGA are fed to the driver circuit, which consists of a MOSFET-based inverter bridge. When the motor starts rotating, the coils are energized in accordance with the commutation sequence.
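The energizing sequence can be pictured as a lookup on the rotor's electrical angle. The phase pattern below is the standard six-step table for a three-phase bridge, assumed here for illustration — the specific mapping used in this design is not spelled out in the text:

```python
# Standard six-step commutation table for a three-phase bridge: each entry
# gives (high-side phase, low-side phase) for one 60-degree electrical
# sector.  Illustrative only; the design's actual mapping is not published.
SIX_STEP = [("A", "B"), ("A", "C"), ("B", "C"),
            ("B", "A"), ("C", "A"), ("C", "B")]

def commutation_state(electrical_deg):
    """Select the bridge state from the rotor's electrical angle."""
    sector = int(electrical_deg % 360) // 60
    return SIX_STEP[sector]

print(commutation_state(90))  # second 60-degree sector
```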

The three-phase currents are controlled to a quasi-square waveform in order to synchronize with the back EMF and produce a constant torque. The resolver provides the motor shaft position in terms of sine and cosine waveforms.

The resolver feedback signals are in analog form and are converted into digital form with the help of a resolver-to-digital converter (RDC). The RDC outputs are fed to the FPGA for further processing. The controller uses two error signals: velocity feedback and current feedback.

### **2. Summary**

The FPGA has been interfaced to an RDC for position feedback information of the motor shaft. The electronic commutation sequence is generated and loaded into the output port to drive the three-phase inverter. The speed control is implemented with suitable analog electronics in conjunction with PWM determination, both for duty cycle and frequency, by the FPGA. Mathematical modeling of the BLDC motor has been carried out and a MATLAB Simulink simulation is performed to determine the static and dynamic response of the drive system.

Because of their high performance, brushless DC motors are gaining wide acceptance in telescope drive systems. The speed control for a brushless DC motor has been designed and implemented in an FPGA. In order to control the motor torque, a current controller is designed and implemented in a Spartan-3 FPGA.

### **Author details**

Suneeta Harlapur

Address all correspondence to: sunitahaveri@gmail.com

Vemana Institute of Technology, Bangalore, India

### **References**

[1] Gambhir R, Jha AK. Brushless DC motor: Construction and applications. International Journal of Engineering Science. 2013;**2**(5):72-77

[2] Jahns TM, Kliman GB, Neumann TW. Interior permanent magnet synchronous motors for adjustable-speed drives. IEEE Transactions on Industry Applications. 1986;**35**:738-746

[3] Analog Devices AD2S83 Data Sheets

[4] Xilinx Spartan 3 Family Data Sheets

**Chapter 5**

**Functional Verification of Digital Systems Using Meta-Heuristic Algorithms**

DOI: 10.5772/intechopen.80048

Alfonso Martínez-Cruz, Ignacio Algredo-Badillo, Alejandro Medina-Santiago, Kelsey Ramírez-Gutiérrez, Prometeo Cortés-Antonio, Ricardo Barrón-Fernández, René Cumplido-Parra and Kwang-Ting Cheng

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.80048

Abstract

Trends in technological developments, such as autonomous vehicles, home automation, connected cars, IoT, etc., are based on integrated systems or application-specific integrated circuits with high capacities, where these systems require even more complex devices. Thus, new techniques to design more secure systems with a short time to market are needed. At this point, verification is one of the highest costs in the manufacturing stage and the most expensive part of the design process. To reduce the time and cost of the verification process, artificial intelligence techniques based on the optimization of the coverage of behavioral areas have been proposed. In this chapter, we will describe the main techniques used in the functional verification of digital systems of medium complexity, focusing especially on meta-heuristic algorithms such as particle swarm optimization, genetic algorithms, and so on. Several results are presented and compared, where the opportunity areas will be described.

Keywords: digital systems, meta-heuristics, FPGAs, PSO, genetic algorithms, functional coverage, verification, automation

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### 1. Proposed techniques for functional verification of digital systems


New applications in different areas, such as the automotive industry, robotics, IoT, and smartphones, among others, require increasingly complex digital devices. This implies the use of new techniques to reduce the design time of the devices, ensuring useful functionality according to the specification. It is important to know that manual simulation and functional verification require much time and expertise, so it has been necessary to develop software tools that improve performance, reduce manufacturing times, decrease verification costs, and increase the confidence level of the RTL implementations. In addition, new systems use a large amount of computational resources and new algorithms that increase the complexity of digital systems and require new methods to analyze and evaluate the device under verification (DUV). Several works of researchers on functional coverage methods have been made. Most studies use the following philosophies: static (methods based on logical or mathematical techniques), dynamic (methods based on simulation), and hybrid methods (combining static and dynamic). Next, works based on meta-heuristic and data mining algorithms report different methods for verification.

To perform verification of digital systems, different approaches have applied heuristic algorithms, for example, genetic algorithms (GA) that apply the evolution theory, where individuals within a population adapt to the conditions to the environment, compete for resources, and generate the evolution of the population through operators such as selection, crossing and mutation. Most of the time, the generation of pseudorandom tests produces worse results than this generation of test sequences. For example, authors in [1] perform a PowerPC architecture verification using genetic algorithms by generating pseudorandom custom instructions and encoding a sequence of instructions with a fixed length. The population size is small to reduce system simulation time. In the same way, in [2] the authors presented an implemented method to generate directed tests through a genetic algorithm, and a cell represents the chromosome in a uniform random distribution in two limits; the different parameters of the method were not fully automated; therefore, extensive knowledge of the evolutionary framework by the user is needed. In addition, in [3], the authors configure a genetic algorithm, which is included in a software platform to improve the functional coverage in a device. In this latter, chromosome coding is based on established instructions, and the proposed method helps to achieve uncovered tasks and increases the hit rate in the test of hard cases, improving the results of the pseudorandom test generation.
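The GA setups cited above share a common shape: an individual is a fixed-length instruction sequence, and fitness is the coverage the simulated sequence achieves. The sketch below is an illustrative toy, not the implementation of [1]–[3]; the instruction alphabet, the `simulate` coverage model, and all parameter values are invented stand-ins:

```python
import random

# Toy GA over fixed-length instruction sequences; fitness = coverage bins hit.
INSTRUCTIONS = list(range(16))   # assumed instruction alphabet
SEQ_LEN = 8

def simulate(seq):
    """Hypothetical coverage model: one bin per distinct adjacent pair."""
    return {(a, b) for a, b in zip(seq, seq[1:])}

def fitness(seq):
    return len(simulate(seq))

def crossover(a, b):
    cut = random.randrange(1, SEQ_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.1):
    return [random.choice(INSTRUCTIONS) if random.random() < rate else g
            for g in seq]

def evolve(pop_size=20, generations=30):
    pop = [[random.choice(INSTRUCTIONS) for _ in range(SEQ_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # coverage bins hit by the best sequence found
```

A small population, as noted for [1], keeps the number of costly DUV simulations per generation low.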

Some works have implemented ant colony optimization (ACO) and particle swarm optimization (PSO). On the one hand, the ACO uses the imitation of the behavior of ants seeking better paths from the initial place to the food place; ants get their food by means of pheromones and, in this way, other ants walk the paths and can provide positive feedback. In [4], a method based on the ACO that combines the pseudorandom test generation with a software platform that generates the digital system states is presented. The results show a reduction in computational complexity compared to random generation and other heuristics based on GA. On the other hand, the PSO algorithm is based on the interaction between the particles in the swarm. For instance, the authors in [5] present a verification method by using branches as a coverage metric and a PSO algorithm to perform the validation of RTL implementations.

Other algorithms have been applied; for example, in [6], the authors used Bayesian networks in a functional verification method, and this type of networks is a model based on probabilistic graphs that are composed of random variables or nodes and edges that represent dependencies between them. The verification has feedback and the ability to cover hard cases and increase the coverage rate of progress, even though a manual configuration for the process was required. Other techniques were proposed for hardware verification based on meta-heuristics [7], where a differential evolution (DE) algorithm is applied; the verification is based on a coverage model using coverpoints and the algorithm is used to generate test vector sequences.

Works have improved the functional coverage using data mining. In [8], the authors proposed a learning methodology where knowledge from test is extracted. The extracted data is reused to generate tests with similar values to other important ones and cover new assertions. The method is applied to perform a constrained random verification of a processor and reports improvements in assertions coverage through the information extracted in the verification. The authors in [9] proposed an automatic learning method of rules regarding micro-architectural behavior of the instructions, and these rules were embedded in a stimuli generator tool. The method is applied in a microprocessor, improves the quality of the test cases generated and reaches interesting coverage events. In addition, [10] describes a method based on decision trees. In this method, before activating the sentences, they go through an engineering of formal verification to filter the candidate alterations in the output, generating automating RTL sentences. The proposed method was divided into two spaces: static and dynamic techniques. Static analysis techniques were used to direct the data mining process. In addition, Hidden Markov Models (HMM) are statistical methods that use probability measurements for sequential models of the data represented by sequences.

Other techniques used in functional verification are based on mutations that are changes of the RTL implementation, and such coverage metrics are used to drive the verification progress during simulation. For example, in [11] the authors proposed a methodology to verify a microprocessor using mutations. To test the vector sequences, the design simulation is performed first, then a set of mutations is added, and the verification is executed. Finally, a comparison of the results is made. One of the problems occurs when a large number of mutations are added because the verification time is increased.
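The mutation-based flow can be illustrated in miniature: run the tests against the original design and against mutated copies, and count how many mutants some test distinguishes. The "design" below is a toy combinational function rather than an RTL model, and the mutants are hand-picked operator substitutions:

```python
# Tiny mutation-testing illustration: a test set that never distinguishes a
# mutant from the original design is too weak.
def design(a, b):
    return (a & b) ^ b

MUTANTS = [
    lambda a, b: (a | b) ^ b,   # mutate & -> |
    lambda a, b: (a & b) | b,   # mutate ^ -> |
    lambda a, b: (a & b) ^ a,   # mutate operand b -> a
]

def killed(tests):
    """Count mutants whose output differs from the design on some test."""
    return sum(any(m(a, b) != design(a, b) for a, b in tests)
               for m in MUTANTS)

tests = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(killed(tests), "of", len(MUTANTS), "mutants killed")  # 3 of 3
```

The runtime problem the text mentions is visible even here: every added mutant multiplies the number of simulations the verification must run.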

In this chapter, an alternative hybrid method that uses coverage models is presented. This method represents the device behavior through CoverPoints, and fitness functions focused on sets of specific behavioral regions. In particular, a PSO algorithm with a re-initialization mechanism (BPSOr) is described. The method represents a hybrid technique that uses a simulation tool and meta-heuristic algorithms through a proposed verification interface.
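A binary PSO with re-initialization can be sketched as below. This is only an illustration of the mechanism — the coverage model, swarm parameters, and stall criterion are assumptions, not the BPSOr configuration described in this chapter:

```python
import math, random

# Minimal binary PSO with re-initialization (sketched after the BPSOr idea).
N_BITS, SWARM, W, C1, C2 = 12, 10, 0.7, 1.5, 1.5

def coverage(bits):
    """Stand-in fitness: pretend each set bit hits one CoverPoint bin."""
    return sum(bits)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso_r(iterations=50, stall_limit=10):
    pos = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(SWARM)]
    vel = [[0.0] * N_BITS for _ in range(SWARM)]
    pbest = [p[:] for p in pos]
    gbest, stall = max(pbest, key=coverage)[:], 0
    for _ in range(iterations):
        improved = False
        for i in range(SWARM):
            for d in range(N_BITS):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (gbest[d] - pos[i][d]))
                # binary PSO: the sigmoid of the velocity gives P(bit = 1)
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if coverage(pos[i]) > coverage(pbest[i]):
                pbest[i] = pos[i][:]
            if coverage(pbest[i]) > coverage(gbest):
                gbest, improved = pbest[i][:], True
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:   # re-initialization escapes stagnation
            pos = [[random.randint(0, 1) for _ in range(N_BITS)]
                   for _ in range(SWARM)]
            stall = 0
    return gbest

print(coverage(bpso_r()))
```

In a real flow, `coverage` would be replaced by a call into the simulation tool through the verification interface, with the particle's bits decoded into a stimulus sequence.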

### 2. Functional verification elements


In large-scale electronic integration design, functional verification is the verification process of a design logic that complies with specific rules design for its operation and manufacturing in an integrated circuit. The functional verification answers the question: "Does the proposed electronics design meet the desired design and functionality requirements?" A complex task with times and high computational efforts is presented mainly in VLSI design. The functional verification is adjacent to a deeper design verification that, in addition to functional verification, adopts nonfunctional aspects such as time, design and power, implemented in the design of mixed circuits for signal processing.

identifying the holes produced and, then adding more constraints. Finally, the process is

Functional Verification of Digital Systems Using Meta-Heuristic Algorithms

http://dx.doi.org/10.5772/intechopen.80048

77

Verification is a very difficult task due to a large volume of possible test cases that exist even in

1. The logical simulation executes the logic of a circuit before building it to obtain its approx-

2. Simulation acceleration applies special-purpose hardware to the logic simulation problem. 3. Programmable logic creates a version of a system; this is expensive and even much slower than real hardware and orders of magnitude faster than simulation. For example, they can

4. Formal verification attempts to prove mathematically that certain requirements are met or

5. Automated verification uses automation to adapt the test bench to changes in the register

Different methodologies have been proposed in order to perform the functional verification. Three different philosophies have been suggested in order to perform the functional verification: static methods (formal methods), dynamic methods (which are based on simulation) and hybrid methods (which does not fall in formal and informal methods). Every philosophy contains different strategies in order to test the digital system functionality. For example, formal methods perform the verification using mathematical expressions to give a formal description of the device's behavior. Examples are model checking, theorem proving, etc.

During the verification based on dynamic methods, the stimuli are used to exercise the functionality, and test benches are also implemented and added to the verification environment. These methods are very scalable and practical. Due to the greater constant complexity of the devices, the use of these methods in the industry is very common. On the other hand, even if

Hybrid methods make up the third category, combining the formal and dynamic techniques. This type of methods is focused on increasing the coverage obtained from the bottleneck guiding the search through the full coverage space. A disadvantage is that its design requires

Searching directly for test vectors sets that appropriately evaluate and examine the functionality of the developed devices is not trivial. For example, for the deterministic methods, the consumption of resources is generally growing exponentially, which depends on the size and architecture of the circuit. Consequently, other solutions have been proposed, i.e., methods that use meta-heuristic are mainly applied to decrease the computational complexity when verify-

the designs are completely verified, it is not easy to guarantee that there are no errors.

6. Specific HDL versions and other heuristics are used to find common problems.

a simple design. The verification can be attacked by many methods:

be used to start the operating system in a processor.

that certain undesired behaviors cannot occur.

broad background about verification techniques.

ing the device.

2.1. Problems solution through meta-heuristic algorithms

executed until a stop criterion is met.

imate behavior.

transfer level code.

There are different elements which work during the functional verification process. A verification system usually consists of several types of components:


Figure 1 shows a pseudorandom test generation scheme where the functional coverage is used as coverage metric. The verification is done using constraints for the stimuli during the device simulation. After a specific number of iterations, the coverage information is reviewed by

Figure 1. HDL verification through pseudorandom test generation.

identifying the holes produced and, then adding more constraints. Finally, the process is executed until a stop criterion is met.

electronics design meet the desired design and functionality requirements?" This is a complex, time-consuming and computationally expensive task, mainly in VLSI design. Functional verification sits next to a deeper design verification that, in addition to functionality, also covers nonfunctional aspects such as timing, design and power, implemented in the design of mixed circuits for signal processing.

There are different elements that work during the functional verification process. A verification system usually consists of several types of components:

1. Test generators are used in the stages of functional verification where test vectors are used to detect faults in the specifications and in the generated code. These generators use a full SAT type of NP resolution, which is computationally expensive. In other types of generators, the vectors are created manually, for instance, in the patented graphics-based generator (GBM). In short, modern generators create random vectors that are applied statistically to the design under verification; therefore, the users of the generators do not clearly specify the requirements for the test generation.

2. The supervisors interpret the stimuli produced by the vector generator for the DUV inputs. Generators create entries at a high level of abstraction, for example, transactions or instructions in assembly language. Supervisors convert this entry into inputs for the DUV as defined in the design interface specification.

3. The simulators (software tools) exercise the circuits under verification to obtain their outputs, depending on the current state of the design and the injected input vectors (verification vectors). In this case, the software tool has a description of the design netlist.

4. The monitor converts the state of the design and its outputs into an abstract transaction level that will be stored in a score-board database for later verification.

5. The verifier validates the score-board data. In some cases, the generator produces the expected results in addition to the inputs. For those cases, the verifier must validate that the actual results match the expected results.

6. The supervisor is included in the verification environment and manages all the above components together.

Figure 1 shows a pseudorandom test generation scheme where functional coverage is used as the coverage metric. The verification is done using constraints on the stimuli during the device simulation. After a specific number of iterations, the coverage information is reviewed.

Figure 1. HDL verification through pseudorandom test generation.
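As a rough illustration, the six components above can be wired into a minimal simulation-based loop. Every name here (the toy 8-bit adder standing in for the DUV, the golden model inside the checker) is invented for the sketch and is not from the chapter:

```python
import random

random.seed(0)

def generator(n_bits):                 # 1. test generator: random stimulus vector
    return [random.randint(0, 1) for _ in range(n_bits)]

def driver(vector):                    # 2. supervisor/driver: abstract entry -> DUV input
    return int("".join(map(str, vector)), 2)

def duv_model(a, b):                   # 3. stand-in for the simulated DUV: a toy 8-bit adder
    return (a + b) & 0xFF

def monitor(output):                   # 4. monitor: raw output -> transaction record
    return {"sum": output}

scoreboard = []                        # 5. score-board database

def checker(txn, a, b):                # 5. verifier: actual result vs. expected result
    return txn["sum"] == ((a + b) & 0xFF)

for _ in range(10):                    # 6. supervisor: manages the components together
    a, b = driver(generator(8)), driver(generator(8))
    txn = monitor(duv_model(a, b))
    scoreboard.append(txn)
    assert checker(txn, a, b)
```

In a real environment the DUV would be an HDL simulation rather than a Python function, but the division of labor between the components is the same.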

Verification is a very difficult task due to the large volume of possible test cases that exist even in a simple design, and it can be attacked by many methods.


Different methodologies have been proposed to perform functional verification. Three philosophies have been suggested: static methods (formal methods), dynamic methods (based on simulation) and hybrid methods (which combine elements of the other two). Each philosophy contains different strategies for testing the functionality of a digital system. For example, formal methods perform the verification using mathematical expressions to give a formal description of the device's behavior; examples are model checking, theorem proving, etc.

During verification based on dynamic methods, stimuli are used to exercise the functionality, and test benches are implemented and added to the verification environment. These methods are scalable and practical, and because of the constantly growing complexity of devices, their use in industry is very common. On the other hand, even when designs appear completely verified, it is not easy to guarantee that no errors remain.

Hybrid methods make up the third category, combining formal and dynamic techniques. This type of method focuses on increasing the coverage obtained by guiding the search through the full coverage space past bottlenecks. A disadvantage is that designing them requires a broad background in verification techniques.

### 2.1. Problem solving through meta-heuristic algorithms

Searching directly for test vector sets that appropriately evaluate and examine the functionality of the developed devices is not trivial. For deterministic methods, for example, resource consumption generally grows exponentially with the size and architecture of the circuit. Consequently, other solutions have been proposed; in particular, methods that use meta-heuristics are applied mainly to decrease the computational complexity of verifying the device.

Meta-heuristic methods are algorithms that search for a global solution using local approximations and heuristics. A meta-heuristic represents a top-level strategy that guides the heuristics used to solve a problem. Frequently, not all search details are specified, and they can be adjusted for a specific problem. There are also general techniques for directing the search so that local optima are avoided; these are employed in the verification context. For instance, in genetic algorithms, a population of individuals is used as an initial set of solutions, the fitness value of a test sequence represents how good an individual is, and the search for solutions is directed by combining individuals using a set of operators.
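The genetic-algorithm idea in the last sentence can be sketched compactly. The population size, generation count and the bit-counting fitness (a stand-in for a coverage score) are invented for illustration:

```python
import random

random.seed(1)
N, BITS = 20, 16
fitness = lambda ind: sum(ind)               # invented stand-in for a coverage score

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(N)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: N // 2]                  # selection: keep the fittest half
    children = []
    while len(parents) + len(children) < N:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, BITS)      # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(BITS)] ^= 1   # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

Crossover and mutation are the "set of operators" that combine individuals; in a verification setting the fitness would come from coverage data instead of a bit count.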


Functional Verification of Digital Systems Using Meta-Heuristic Algorithms

http://dx.doi.org/10.5772/intechopen.80048

79

In other words, first, a test vector sequence is injected into the input of the device; then, if a new feature from the intended behavior is covered, the test sequence and the value of the feature exercised are stored. Later, the device states are reviewed and another test vector is produced to verify the "DUV." After this, all the states are verified and the values of coverage metrics are

Figure 2 shows the main steps in the automation of the directed test generation. In this scheme, a verification plan based on based on a functional specification is needed, which describes

A functional coverage model can be described as a functional coverage space where the device behavior is captured. This means that it represents a coverage space that contains the interrelationships that exist between inputs, outputs, tasks, events, conditions and characteristics, which could show the correct functionality of a device with a confidence degree of a device. The coverage model is designed based on the implementation or device specification and a

A coverage metric consists of a heuristic to measure what part of the device behavior has been verified correctly. The main objective of this measure is to reflect which parts of the functionality have been met with correct execution during the processing of the information by the device, i.e., functional coverage (verify that all characteristics meet the specification), statement coverage (verify if the lines of code in the HDL implementation are exercised), branch coverage (analyze if the paths are traveled through the branches during the simulation), and finite-state

The models are fundamental components of the verification process. A coverage model using stimuli, events, constraints, and CoverPoints is generated. It means that the coverage models are representations that map the intended behavior through characteristics, inputs, outputs, and its interrelations. A coverage model can be based on coverage points (CoverPoints).

A coverage model can be defined as the different characteristics to represent the device behavior according to a functional specification that has different constraints. In particular,

Finally, all device states are reviewed and a new test is generated.

what characteristics of the device will be verified and how it will be done.

2.3. Functional coverage models and coverage metrics

machine (check how many states have been covered correctly).

CoverPoints represent the values of each variable in a coverage model.

coverage metric or coverage structure.

Figure 2. Automation of directed test generation.

analyzed.

There are different definitions of meta-heuristics; commonly, a meta-heuristic can be defined as a process that drives other heuristics through a combination of elements to explore and exploit the search space. Besides that, it uses learning strategies to manage the information obtained and reach optimal solutions. Some examples of meta-heuristics are ant colony optimization, the artificial bee colony algorithm, genetic algorithms, etc. Many works have used this type of algorithm to solve different problems. They are especially suitable for optimization problems where the computation of the cost function is expensive and affected by noise. In short, meta-heuristics are techniques that find good solutions in large search spaces.

### 2.2. Automated functional verification in digital systems

In this work, the functional verification of the devices is designed and executed automatically. Moreover, when the functional verification uses the coverage data produced by each simulation, it is called "directed functional verification." A fundamental aspect is the coverage information (an integrity measurement) for the test sets, which represents the data reviewed during verification. In addition, the analysis of this process allows the generation of new test sequences to evaluate other coverage regions.

The verification by simulation of the device is carried out once the expected functionality is translated into an RTL implementation according to the specification and the criteria of the designer. Then, the device is reviewed through a series of elements, for example, checkers, monitors, test benches, etc. In the end, the verification platform gives the coverage results that express the percentage of functionality verified. According to the functional verification definition in [12], an RTL implementation of a device, based on a set of features and operational requirements, should be provided to execute the verification, which is the process that guarantees that the device implementation complies with each feature given in the specification.

Automated functional verification involves different elements such as coverage models, control flow graphs, test sequences, and cost functions, among other elements. When these elements interact, a system of test vector generation is formed. Some verification methods use this type of scheme to perform verification of digital devices.

An important case occurs when the test generation uses feedback information to explore new behavior regions; when this happens, it is called coverage-directed test generation. There are different definitions; for instance, according to [12], this generation produces different test sequences to exercise different functionalities (characteristics of the coverage models) of the device. The process occurs when a test sequence is injected into the input of a device and a new functional value is exercised; the value obtained is then stored. Finally, all device states are reviewed and a new test is generated.

In other words, first, a test vector sequence is injected into the input of the device; then, if a new feature from the intended behavior is covered, the test sequence and the value of the feature exercised are stored. Later, the device states are reviewed and another test vector is produced to verify the "DUV." After this, all the states are verified and the values of coverage metrics are analyzed.
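The feedback loop just described can be sketched as follows; the toy DUV and its two-component "feature" are invented for the sketch:

```python
import random

random.seed(2)
coverage = set()                      # features of the intended behavior seen so far
kept_tests = []                       # test sequences that exercised something new

def simulate(vec):                    # toy DUV: its "feature" is (parity, first bit)
    return (sum(vec) % 2, vec[0])

for _ in range(100):                  # generate -> simulate -> review coverage
    vec = [random.randint(0, 1) for _ in range(8)]
    feature = simulate(vec)
    if feature not in coverage:       # a new behavior covered: store test and value
        coverage.add(feature)
        kept_tests.append(vec)
```

A directed generator would additionally bias the next `vec` toward the features still missing from `coverage`, rather than drawing it at random.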

Figure 2 shows the main steps in the automation of directed test generation. In this scheme, a verification plan based on a functional specification is needed, which describes what characteristics of the device will be verified and how it will be done.

Figure 2. Automation of directed test generation.

### 2.3. Functional coverage models and coverage metrics


A functional coverage model can be described as a functional coverage space where the device behavior is captured. This means that it represents a coverage space containing the interrelationships that exist between inputs, outputs, tasks, events, conditions and characteristics, which can show the correct functionality of a device with a degree of confidence. The coverage model is designed based on the implementation or device specification and a coverage metric or coverage structure.

A coverage metric is a heuristic that measures what part of the device behavior has been verified correctly. The main objective of this measure is to reflect which parts of the functionality have been exercised correctly while the device processes information, e.g., functional coverage (verify that all characteristics meet the specification), statement coverage (verify whether the lines of code in the HDL implementation are exercised), branch coverage (analyze whether the paths through the branches are traveled during the simulation), and finite-state machine coverage (check how many states have been covered correctly).
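Each of these metrics reduces to a ratio of items exercised over items defined. A minimal sketch, with the statement and feature sets invented for illustration:

```python
# Hypothetical universe: 10 HDL statements and 4 specified features.
all_statements = set(range(1, 11))
all_features = {"reset", "overflow", "carry", "zero"}

# Hypothetical simulation results: what the tests actually exercised.
executed = {1, 2, 3, 5, 6, 7, 9, 10}          # statements hit during simulation
exercised = {"reset", "carry", "zero"}        # features observed by the monitor

# Each metric is "items exercised / items defined", as a percentage.
stmt_cov = 100.0 * len(executed & all_statements) / len(all_statements)
func_cov = 100.0 * len(exercised & all_features) / len(all_features)
```

Here `stmt_cov` is 80% and `func_cov` is 75%; branch and FSM coverage follow the same pattern over sets of branches and states.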

Coverage models are fundamental components of the verification process. A coverage model is generated using stimuli, events, constraints, and CoverPoints. This means that coverage models are representations that map the intended behavior through characteristics, inputs, outputs, and their interrelations. A coverage model can be based on coverage points (CoverPoints); CoverPoints represent the values of each variable in a coverage model.
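A CoverPoint-based model can be sketched as a mapping from each variable to the set of values the model must observe; the variables and values below are invented:

```python
# A CoverPoint lists the values of one variable that the model must observe.
cover_model = {
    "opcode": {"ADD", "SUB", "AND", "OR"},
    "carry_in": {0, 1},
}

observed = {var: set() for var in cover_model}

def sample(txn):                  # record the values seen on each CoverPoint
    for var, value in txn.items():
        if value in cover_model.get(var, set()):
            observed[var].add(value)

for txn in [{"opcode": "ADD", "carry_in": 0},
            {"opcode": "SUB", "carry_in": 0}]:
    sample(txn)

# The remaining "holes" are the CoverPoint values never observed.
holes = {var: cover_model[var] - observed[var] for var in cover_model}
```

The `holes` dictionary is exactly what a directed generator would try to drive to empty.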

A coverage model can be defined as the set of characteristics that represent the device behavior according to a functional specification with its constraints. In particular, the way of representing the behavior affects the granularity of the model; that is, a model with more characteristics can represent the original intention more faithfully and, as a consequence, has a higher level of granularity. The accuracy of the model describes the implementation. The coverage model may contain explicit and implicit device behavior features. Moreover, the models are designed according to the device specification and implementation. Figure 3 shows a coverage model, where the fidelity of a model determines how closely it captures the actual requirements of the device behavior.

Figure 3. Functional coverage model.

### 3. Verification method using BPSOr algorithm

The proposed verification method uses the BPSOr algorithm, which is based on psychological and social elements. In this social-cognitive context, individuals must interact with each other, and the best performance emerges from the particle group and previous behaviors. Each individual is a particle, each particle group is a neighborhood, and the cognitive and social behavior of each particle is influenced by improved performance within the groups.

At this point, two proposals are presented: lbest (local best) and gbest (global best). In the first proposal, the particle with the best performance in its group affects the remaining particles of that group. In the second proposal, the whole swarm matters, because all particles are connected; the best-performing particle of the swarm affects every particle, and the results are improved.

In the swarm, each dimension is analyzed, and there are two main computational elements: memory and velocities. The first establishes the best particle location by comparing the current position with better ones found during the search. In addition, a key quantity is the rate of change, which is computed per particle from the velocities to obtain gbest (the best global solution) and lbest (the best local solution). Incremental changes in both learning and attitude are simulated, providing the granularity of the search in the problem space. On the one hand, speed represents changes in probabilities, which may have the value "1" or "0." On the other hand, considering the particle dimension, the attitude of the changes represents the probability, which can be "1" or "0." For these reasons, the sigmoid function S(v_id) [13] transforms velocities into probability values used to obtain the state of each particle, see Eq. (1). If v_id is high, the particle bit will probably be 1, and if v_id is low, the particle bit will probably be 0, where v_id is a value in the range [Vmin, Vmax] = [0.0, 1.0], ensuring that the dimension bit (for the sequences) takes one of the two possible values "1" or "0."

$$S(V\_{id}) = \frac{1}{1 + \exp\left(-v\_{id}\right)}\tag{1}$$
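Eq. (1) is easy to check numerically. The short sketch below (values rounded to three decimals) also notes a consequence of the velocity range stated in the text:

```python
import math

def S(v):                    # Eq. (1): maps a velocity to the probability of a '1' bit
    return 1.0 / (1.0 + math.exp(-v))

# With velocities clamped to [Vmin, Vmax] = [0.0, 1.0] as the text states,
# the probability of a '1' ranges only from S(0.0) = 0.5 up to S(1.0) ~ 0.731.
probs = [round(S(v), 3) for v in (0.0, 0.5, 1.0)]
```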

It is important to control the influence (from the paths of each particle and of the other particles in the population), because the particles can then move to the regions where the fitness variables have the best values. In this context, pseudorandom values have produced better results when used as the mutation operators in genetic algorithms. If the PSO algorithm is expressed in real numbers, many problems arise in binary domains, requiring extra operations to convert real values to binary values.

In binary versions, the PSO algorithm uses binary data directly with a re-initialization process, see Algorithm 1. The latter is composed of instructions or rules, where a particle is represented by a set whose elements are binary sequences. In this algorithm, in the first step, the position x_i and velocity v_i of each particle are initialized and computed. In the second step, the fitness G(x_i) and that of the best previous position p_id are compared; if G(x_i) is better, then the best position p_id is set equal to x_id. The velocities v_id are then evaluated: for every particle dimension, x_id takes the value "1" when the random number r_id is less than S(v_id(t)) (from the sigmoid speed function) and the value "0" otherwise. These steps are executed until the stop condition is reached.

Algorithm 1. Pseudocode of Binary PSO with a re-initialization process (BPSOr).
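A minimal sketch of a binary PSO with re-initialization, assembled from the description above and Eqs. (1) and (2). The swarm size, φ values, stagnation threshold and the bit-counting fitness are invented stand-ins, not the chapter's actual settings:

```python
import math
import random

random.seed(3)
N, D = 12, 10                          # swarm size and bits per particle (invented)
PHI1 = PHI2 = 2.0                      # upper limits for the random weights (invented)
PRE = 0.5                              # re-initialization factor Pre (invented)
fitness = lambda x: sum(x)             # invented coverage-style objective G

def sigmoid(v):                        # Eq. (1)
    return 1.0 / (1.0 + math.exp(-v))

X = [[random.randint(0, 1) for _ in range(D)] for _ in range(N)]
V = [[random.uniform(0.0, 1.0) for _ in range(D)] for _ in range(N)]
pbest = [x[:] for x in X]
gbest = max(pbest, key=fitness)[:]
stagnant = 0

for t in range(60):
    improved = False
    for i in range(N):
        for d in range(D):
            # Eq. (2): cognitive + social velocity update, clamped to [Vmin, Vmax]
            V[i][d] += (random.random() * PHI1 * (pbest[i][d] - X[i][d])
                        + random.random() * PHI2 * (gbest[d] - X[i][d]))
            V[i][d] = min(1.0, max(0.0, V[i][d]))
            # Eq. (1): draw the bit from the sigmoid of its velocity
            X[i][d] = 1 if random.random() < sigmoid(V[i][d]) else 0
        if fitness(X[i]) > fitness(pbest[i]):
            pbest[i] = X[i][:]
        if fitness(X[i]) > fitness(gbest):
            gbest, improved = X[i][:], True
    # re-initialization: restart the positions if the swarm stops improving,
    # keeping pbest and gbest stored, as the text describes
    stagnant = 0 if improved else stagnant + 1
    if stagnant > 5 and random.random() < PRE:
        X = [[random.randint(0, 1) for _ in range(D)] for _ in range(N)]
        stagnant = 0
```

In the verification method, each particle would encode a test vector sequence and `fitness` would be computed from the coverage results of a simulation run.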


In this pseudocode, Vmax and Vmin constrain each probability of change, considering each position of the particle, and the re-initialization process avoids local optima and covers new behavior regions. This process is based on population measures: if the best global performance is greater than the best current performance of the swarm, the swarm of particles is initialized again. Consequently, the best particle position and the best particle of the population are stored, and both the current positions and the particle velocities are re-initialized. To re-initialize the velocities, a probability value is computed, whose aim is to avoid convergence to a local optimum.


The main aspect of this algorithm is the decision of whether a bit of the string takes the value 1 or 0, which is based on a probability defined as a function of personal and social factors, see Eq. (2), where: (a) v_id(t−1) is a measure of the current probability (individual predisposition) of deciding 1 or 0; (b) φ1 and φ2 are positive random weights obtained from a uniform distribution, with predefined upper limits; (c) r1 and r2 are positive random numbers between 0 and 1; (d) x_id(t) describes the current state of bit-string position d for individual i; (e) t represents the current discrete time and t−1 the previous one; (f) p_id represents the best state and takes the value 1 if the best success was located when x_id was 1, and 0 otherwise; (g) p_gd is the best neighbor and takes the value 1 if the best success found when examining the neighborhood was reached with state 1, and 0 otherwise; and (h) r_id is a vector (or data structure) of random numbers obtained from a uniform distribution between 0.0 and 1.0, and Pre represents the re-initialization factor, a real value in the unit interval 0.0 to 1.0.

$$v\_{id}(t) = v\_{id}(t-1) + r1 \times \varphi\_1 \left(p\_{id} - x\_{id}(t-1)\right) + r2 \times \varphi\_2 \left(p\_{gd} - x\_{id}(t-1)\right) \tag{2}$$

Figure 4 shows the flow diagram of the BPSOr algorithm. The advantages of the binary PSO algorithm with re-initialization (BPSOr) make it possible to produce test sequences, operating in the verification context and analyzing the devices being verified.

Figure 4. Flow diagram of BPSOr algorithm.

### 4. Test vector generation method


A proposed interface based on heuristic algorithms and a software tool is used, and several steps are performed to verify the digital systems. The test generation method implemented in this work is shown in Algorithm 2. First, the device parameters must be configured and initialized. In the same way, several parameters of the meta-heuristic process are initialized and assigned based on the operational requirements (specifications) and the implementation.

Algorithm 2. General method of generation of test vector sequences.

In this case, the BPSOr algorithm generates the test sequences; then, a simulation tool is used to evaluate them. The coverage information from the device simulation is reviewed and saved. Then, the fitness values are computed and the best values are stored, to be used in the next iteration.
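The generate–simulate–score loop described above can be sketched as follows; `simulate` and `make_sequence` are placeholder hooks for the simulation tool and the generator module (here toy lambdas), not the actual platform API:

```python
import random

def run_verification(simulate, make_sequence, max_evals=8000, target=100.0):
    """Sketch of Algorithm 2: generate a test sequence, simulate it, review
    the coverage, store the best values, and iterate until the stop
    condition (coverage target or evaluation budget) is reached."""
    best_seq, best_cov, evals = None, -1.0, 0
    while evals < max_evals and best_cov < target:
        seq = make_sequence(best_seq)      # generator module (BPSOr, GA, random, ...)
        cov = simulate(seq)                # device simulation returns coverage (%)
        evals += 1
        if cov > best_cov:                 # keep the best values for the next iteration
            best_seq, best_cov = seq, cov
    return best_seq, best_cov, evals

# toy stand-ins for the real hooks: 8-bit sequences, "coverage" = % of ones
simulate = lambda seq: 100.0 * sum(seq) / len(seq)
make_sequence = lambda prev: [random.randint(0, 1) for _ in range(8)]
```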

A local-best topology was implemented in the verification method to perform different experiments. The scheme of this topology is shown in Figure 5, where the test vector sequences are clustered into sets representing groups of particles; in this case, each particle or test sequence is affected by its own fitness value and by the best particle in its neighborhood. The best particle is the test sequence with the best fitness value in the group. Additionally, each test sequence or particle can communicate with the others in its group. In every iteration, the set of particles is directed toward the best particle in its neighborhood.

The global-best configuration is represented in Figure 6; in this topology, each particle is affected by the best solution in the swarm. All particles belong to the same group and move toward the best solution. After every algorithm iteration, the test sequence with the best performance guides the others through the search space.
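The two topologies differ only in the pool of particles a given particle can see; a minimal sketch, where the group representation is an assumption:

```python
def best_particle(fitness, i, groups=None):
    """Index of the particle that guides particle i: the whole swarm for the
    global-best topology, or only i's own group for the local-best topology."""
    if groups is None:                       # global-best: one group, the swarm
        pool = range(len(fitness))
    else:                                    # local-best: the group containing i
        pool = next(g for g in groups if i in g)
    return max(pool, key=lambda j: fitness[j])

fitness = [1.0, 5.0, 3.0, 9.0]
best_particle(fitness, 0, groups=[[0, 1], [2, 3]])  # local-best -> particle 1
best_particle(fitness, 0)                           # global-best -> particle 3
```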

On the other hand, the fitness function used in the algorithms is shown in Eq. 3. This function is focused on the percentage of holes produced in specific CoverPoints ($P_{eh}$). Therefore, the problem is translated into maximizing the number of covered points and, at the same time, minimizing the percentage of holes in specific behavior regions.

Figure 5. Groups of test vector sequences using the local-best topology.

Figure 6. Groups of test vector sequences using the global-best topology.

$$f_1 = \text{MAX}\left(\frac{1}{P_{eh}}\right) \tag{3}$$
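Eq. 3 rewards sequences that leave few holes; a direct sketch, where the percentage form and the zero-hole guard are assumptions:

```python
def f1(holes, bins):
    """Fitness of Eq. 3: reciprocal of the hole percentage P_eh over the
    monitored CoverPoint bins; fewer holes -> larger fitness."""
    p_eh = 100.0 * holes / bins              # percentage of uncovered bins (holes)
    return float("inf") if p_eh == 0 else 1.0 / p_eh

f1(50, 100)   # half the bins are holes -> fitness 0.02
```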


Functional Verification of Digital Systems Using Meta-Heuristic Algorithms

http://dx.doi.org/10.5772/intechopen.80048


Test generation sequences are produced in the verification environment to verify the devices. In the beginning, a new binary sequence is tested and analyzed in the device, running the respective simulation. Then, after the last sequence is completed, the cost values are quantified; their calculation depends on the points and holes determined during the respective simulation. The obtained information is delivered to the test-sequence generator module. Therefore, a new sequence is generated and the process is repeated while the stop condition is not reached.

Figure 7. Proposed scheme.

The proposed verification system is composed of several modules that are connected through an interface between C and SystemVerilog languages. Figure 7 shows a scheme where the system couples the device under verification and the verification process is performed at the RTL level.

### 5. Case study


The proposed verification method is validated through different experiments using two digital systems. Additionally, the performance of BPSOr, a genetic algorithm, PSO, and random test generation is compared. RTL implementations of the devices were employed as benchmarks in the verification platform. The applicability of this type of method focuses on the block-level verification of IP cores, because the automatic verification depends on the degree of controllability of the events generated from the stimulus during the device simulation. The PSO algorithm with a re-initialization mechanism is computationally more complex; however, because this algorithm reaches good solutions very quickly, the verification time can be reduced. The best scenarios with different features of the BPSOr algorithm will be presented.


Table 1. GA algorithm settings for four different scenarios.

| Parameters | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 |
|---|---|---|---|---|
| Crossover percentage | 0.5 | 0.5 | 0.45 | 0.45 |
| Mutation percentage | 0.001 | 0.001 | 0.0005 | 0.003 |
| Population size | 150 | 120 | 100 | 100 |
| f | $f_1$ | $f_1$ | $f_1$ | $f_1$ |

Table 2. Results obtained using a genetic algorithm in the platform to verify a UART-IP core.

| Final values | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 |
|---|---|---|---|---|
| Best value | 100 | 100 | 100 | 100 |
| Worst value | 95.44 | 97.13 | 99.08 | 98.30 |
| Average value | 98.23 | 99.12 | 99.87 | 99.62 |
| Average evaluations | 8090 | 7944 | 7353 | 7623 |
| Average time (min) | 178.09 | 173.05 | 160.46 | 165.36 |


A UART-IP core was employed in order to perform the verification. The UART-IP can be used as a transmitter and a receiver; a 16-bit address bus and an 8-bit data bus are included in the IP core. Its verification was based on the functional specification and the RTL code implementation. The coverage model was implemented using 785 bins in 12 CoverPoints, and the initialization and configuration were performed based on the specification. Besides, the verification of a FIFO memory was performed using a coverage model with 784 bins. This memory is often contained in devices such as processors, UARTs, and interfaces. Its implementation was designed in the Verilog language, and the configuration of the signals was controlled according to the features described in the functional specification.

To develop the proposed experiments, different parameter values, grouped into several scenarios, were used. A scenario consists of the set of parameters used by the meta-heuristic algorithm. In the case of the BPSOr algorithm, parameters such as the topology (global or local), the velocity values, the number of particles, and the ϕ value were modified. Additionally, running a scenario a defined number of times with a specific parameter configuration is defined as an experiment. The swarm size used was between 3 and 16 particles. Also, "global-best" and "local-best" topologies were implemented in the algorithm. The ϕ variable was varied from 2.0 to 4.0 across the scenarios. On the other hand, the evaluation of the test sequences was performed using two fitness functions, which are based on the coverage obtained. Basically, these functions take the CoverPoints and the holes generated at run time. When the simulation of a device ends, the coverage produced is sent to the test generator module and, finally, a new test is generated.
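A scenario is then nothing more than a named parameter set; for instance, the four BPSOr scenarios of Table 3 could be encoded as plain records (the key names are illustrative, the values are taken from Table 3):

```python
# Four BPSOr scenarios (values from Table 3; key names are illustrative).
scenarios = [
    {"particles": 9,  "neighborhoods": 2, "phi_max": 4.0, "topology": "g-best", "fitness": "f1"},
    {"particles": 6,  "neighborhoods": 2, "phi_max": 4.0, "topology": "g-best", "fitness": "f1"},
    {"particles": 12, "neighborhoods": 4, "phi_max": 4.0, "topology": "g-best", "fitness": "f1"},
    {"particles": 3,  "neighborhoods": 1, "phi_max": 4.0, "topology": "g-best", "fitness": "f1"},
]
```

Running one such record a fixed number of times with the same configuration is what the chapter calls an experiment.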

The results obtained are reported for the best scenarios, including information such as the best value, the average, and the total number of iterations. In addition, the binary test sequences were evaluated by modifying their number of elements, or length. For example, if a particle is composed of two sequences, then its height is equal to 2.

The results obtained from the best scenarios will be presented to analyze the BPSOr performance. Furthermore, a genetic algorithm (GA) with elitism was implemented, and parameters such as the crossover percentage, mutation percentage, maximal number of evaluations, and population size were modified. The stop criterion was defined using the total number of evaluations. Besides, all CoverPoints were clustered, focusing on the points that need to be exercised.
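One GA epoch with elitism can be sketched as follows; the truncation selection and the single-point crossover are illustrative choices, since the chapter does not fix the operators:

```python
import random

def ga_step(pop, fitness, p_cross=0.45, p_mut=0.0005):
    """One epoch of a binary GA with elitism: the best individual survives
    unchanged; the rest come from crossover and bit-flip mutation."""
    scored = sorted(pop, key=fitness, reverse=True)
    nxt = [scored[0]]                                 # elitism: best individual kept
    while len(nxt) < len(pop):
        # truncation selection over the better half (an illustrative choice)
        a, b = random.sample(scored[: max(2, len(pop) // 2)], 2)
        if random.random() < p_cross:                 # single-point crossover
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
        else:
            child = a[:]
        nxt.append([bit ^ (random.random() < p_mut) for bit in child])  # mutation
    return nxt
```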

The experiments were performed on a computer running Linux Fedora Core 23, with the following features: an Intel Core i7-4790K CPU at 4 GHz, 8 GB of RAM, and 8192 KB of cache. The verification platform was installed on this system; after each run, the obtained data were saved and reviewed, and the simulation results were handled as statistical information to obtain the best fitness values.

When the verification process is performed, some characteristics may not be exercised due to different factors; for instance, if several behavior regions have the same cost values, then the fitness functions cannot differentiate among them. Even if more algorithm iterations are used, the generated test sequences will cover the same behavior regions and the holes will not be covered. It is therefore important to design strategies that focus on the regions that are not easily covered. One strategy consists of grouping the CoverPoints into sets with different weights to produce higher-level behavior areas. In addition, efficient search algorithms are required; meta-heuristic algorithms can guide the search usefully and exercise all the functionality of the device.
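The grouping strategy can be sketched as a weighted coverage measure, where hard-to-reach CoverPoint sets receive larger weights; the weighting scheme below is illustrative, not the chapter's exact formulation:

```python
def weighted_coverage(groups):
    """Each group is (weight, covered_bins, total_bins); hard-to-cover groups
    get larger weights so their remaining holes dominate the resulting cost."""
    total = sum(w * covered / bins for w, covered, bins in groups)
    return 100.0 * total / sum(w for w, _, _ in groups)

# two easy groups fully covered, one hard group (weight 3) only half covered
weighted_coverage([(1, 10, 10), (1, 8, 8), (3, 5, 10)])  # -> 70.0
```

With a uniform weighting, the two fully covered groups would mask the hard one; the larger weight keeps the optimizer focused on it.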

#### 5.1. Experiments


The functional verification method based on meta-heuristic algorithms can test the functionality regions by focusing on specific behavior parts that require more exploration. In these experiments, the verification method is used to verify two different digital systems. First, a set of experiments is presented to show the performance of the GA. The genetic algorithm used was a binary version in which the best individual remains in the next epoch (elitism). Table 1 contains the parameters used for four different scenarios. Each experiment was run 30 times, and the coverage percentages and average times were stored. For instance, in the first case, a population of 100 individuals was configured with a crossover percentage of 0.5 and a mutation percentage of 0.001.

Table 2 shows the results obtained for the four best scenarios. Reviewing the results, in the third scenario a small number of iterations was required to reach a coverage percentage of 100; besides, the average time used was 160.46 minutes.



Table 3 shows the four best scenarios using the BPSOr algorithm to perform the verification of the UART-IP core. One of the parameters that was changed is the swarm size; in this case, 3, 6, 9, and 12 particles were used in the proposed method. For example, in the first scenario, the parameters used were 9 particles, 3 neighborhoods, ϕ = 4.0, the global-best topology, and the $f_1$ cost function.

After the experiments were performed, the obtained information was reviewed, and the best results for the four scenarios are presented in Table 4. According to these results, using the fourth scenario, an average of 1065 iterations in 23.085 minutes was required to achieve a coverage percentage of 100.

Table 5 contains the results obtained for four generation algorithms: GA, pseudorandom, BPSO, and BPSOr. In these experiments, different parameters of the verification platform were changed. The best scenarios are presented, showing the best, worst, and average coverage, as well as the average number of iterations and the average time.


Table 5. Functional coverage results obtained using genetic algorithms, pseudorandom generation algorithms, PSO, and BPSOr to verify a UART-IP core.

| Final values | Binary GA | Pseudorandom | BPSOr | BPSO |
|---|---|---|---|---|
| Best value | 100 | 96.09 | 100 | 100 |
| Worst value | 99.08 | 94.53 | 100 | 100 |
| Average value | 99.87 | 95.27 | 100 | 100 |
| Average evaluations | 7353 | 8000 | 2137 | 2194 |
| Average time (min) | 160.46 | 174.73 | 46.538 | 48.755 |


Commonly, pseudorandom test generation is used to exercise the device functionality during functional verification. Reviewing the results, at the start the coverage percentage increased very quickly; however, after a coverage threshold was achieved, more iterations were needed to increase the coverage. For instance, in the case of the UART-IP core, percentages over 95% were obtained.

According to the results, the use of meta-heuristic algorithms to guide the search during the functional verification of digital systems is a good alternative, because the behavior areas can be covered very quickly. In the case of genetic algorithms, the population of individuals can evolve by modifying the test sequences to exercise new features of the device. One problem is that the guidance is based on the evaluation of the whole population, which evolves by means of operators such as mutation and crossover; when the population size increases, a larger number of evaluations is required in each epoch and, thus, the simulation time increases. During the functional verification, percentages over 99% were reached for the UART-IP core and the FIFO memory, using less time than pseudorandom generation.

Table 3. Configuration parameters of the BPSOr algorithm for four scenarios using the UART-IP core for two sequence solutions.

| Parameters | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 |
|---|---|---|---|---|
| Number of particles | 9 | 6 | 12 | 3 |
| Number of neighborhoods | 2 | 2 | 4 | 1 |
| ϕ max | 4 | 4 | 4 | 4 |
| Topology | G-best | G-best | G-best | G-best |
| f | $f_1$ | $f_1$ | $f_1$ | $f_1$ |

Table 4. Results obtained for four different scenarios using the BPSOr algorithm in the proposed platform with a UART-IP core.

| Scenario | Number of evaluations | Best value | Worst value | Average | Time (min) |
|---|---|---|---|---|---|
| 1 | 2137 | 100 | 100 | 100 | 46.53 |
| 2 | 1608 | 100 | 100 | 100 | 37.12 |
| 3 | 2719 | 100 | 100 | 100 | 60.954 |
| 4 | 1065 | 100 | 100 | 100 | 23.085 |

On the other hand, when the BPSOr algorithm was used in the verification platform, more functionality was exercised while requiring a smaller number of evaluations. Different from the original version of PSO, the BPSOr algorithm can re-initialize the particle swarm based on the current best coverage percentage and the number of iterations performed at run time. That is, if the coverage percentage has not increased, then the best solution, the best particle positions, and the positions and velocities of the particles are re-initialized. This mechanism is used to avoid falling into local optima and to guide the search toward unexplored behavior regions. In addition, in most of the experiments, the coverage results obtained with the BPSOr algorithm were higher than those of the PSO algorithm. It is important to mention that meta-heuristics can be useful techniques to guide the test generation during the verification of devices.

### 6. Conclusions

The complexity of digital systems is constantly increasing; therefore, new methods are required to improve the confidence and reduce the design time. In this chapter, a verification method based on the use of meta-heuristic algorithms is described. Techniques such as genetic algorithms and particle swarm optimization algorithms were used to verify digital systems, and a comparison was presented. Elements such as coverage models, fitness functions, and software tools are also included. According to the results, the use of meta-heuristic algorithms such as the BPSOr algorithm, together with suitable fitness functions, can be useful to exercise the device functionality by focusing on behavior regions that have not been covered. In the case of the GA, the coverage results show that a lower number of iterations is required than with pseudorandom test generation. Although a coverage percentage of 100 was obtained in the best scenarios, it was observed that increasing the number of individuals increased the number of iterations used and, thus, the time spent in each iteration. The PSO algorithm obtained higher coverage percentages than the GA and pseudorandom generation; a main characteristic is that fewer individuals or particles than in the GA are required. In the case of the BPSOr algorithm, the number of iterations required was lower than for PSO and GA in most experiments; therefore, the verification time was reduced. Consequently, hybrid verification methods can improve the performance of the functional verification of digital systems at block level.
