**Matlab/SystemC for the New Co-Simulation Environment by JPEG Algorithm**

Walid Hassairi, Moncef Bousselmi, Mohamed Abid and Carlos Valderrama

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46474

## **1. Introduction**

118 MATLAB – A Fundamental Tool for Scientific Computing and Engineering Applications – Volume 2


The functionality of embedded systems, as well as the time-to-market pressure, has been continuously increasing in the past decades. Simulating an entire system, including both hardware and software, from the early design stages is one of the effective approaches to improve design productivity. A large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems. However, in order to validate the function of the entire system, the system has to be simulated together with the application software and hardware. Indeed, traditional methods of verification have proven to be insufficient for complex digital systems: register-transfer-level test-benches have become too complex to manage and too slow to execute. New methods and verification techniques have emerged over the past few years. High-level test-benches, assertion-based verification, formal methods and hardware verification languages are just a few examples of the intense research activity driving the verification domain.

Our work makes three contributions. The first is a set of solutions for implementing the different parts of the architecture using the SystemC and Matlab/Simulink simulators. The second is the definition of a co-simulation environment based on the automatic generation of the interfaces required to integrate these simulators. The third is a new verification framework, based on the SystemC Verification standard, that uses MATLAB/Simulink to accelerate test-bench development. This chapter also attempts to give a guide for the implementation of real-time control systems, using the **S-function** of Matlab/Simulink, as a practical tool for students in control engineering. The MATLAB/Simulink-to-SystemC interface and the advanced version of the transactors are combined in a scalable multi-abstraction-level verification platform. The proposed refined co-simulation platform enables co-simulation with hardware models written in SystemC.

© 2012 Hassairi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

On that platform, application software and hardware modules are directly executed on a host computer, which leads to a high co-simulation speed. The MATLAB/SystemC interface is mainly used for the verification of the lower abstraction levels with a high level model of their execution environment.


The integration of SystemC within MATLAB/Simulink and the resulting verification flow are tested on the JPEG compression algorithm. The required synchronization of both simulation environments, including data type conversion, is solved by the proposed co-simulation flow. The application is divided into two JPEG encoder parts: the DCT (Discrete Cosine Transform), the HW part implemented in SystemC, and the QEE (Quantization and Entropy Encoding), the SW part implemented in Matlab. With this research premise, this study introduces a new HW implementation of the DCT algorithm in SystemC. For the communication and synchronization between these two parts we use the S-Function and the MATLAB/Simulink engine. In addition, we compare the co-simulation results to a pure software simulation.

In this chapter, the related work is discussed in Section 2 and the proposed co-simulation methodology is presented in Section 3. Then, in Section 4, we propose the implementation of JPEG image compression as a case study and present the Matlab steps for the implementation of the JPEG algorithm. In Section 5, we summarize the proposed approach and the co-simulation results. Finally, we sum up the proposal, including suggestions and recommendations for future work.

## **2. Related work**

First of all, we present the two chosen simulators: Matlab and SystemC.

The MATLAB environment is a high-level technical computing language for algorithm development, data visualization, data analysis and numerical computing. One of its key features is the ability to integrate with other languages and third-party applications. MATLAB also includes the Simulink graphical environment, used for multi-domain simulation and model-based design. Signal processing designers take advantage of Simulink as it offers a good platform for preliminary algorithmic exploration and optimization. Plain C/C++, however, does not satisfy a hardware designer, because it lacks native notions of concurrency, time, and a hardware model (pins & signals). SystemC extends C++ with exactly these constructs; the resulting modelling language is SystemC.

Connecting Simulink and SystemC has already been tried in the literature. The authors in [6] propose a solution to integrate SystemC models in Simulink: a wrapper is created using S-Functions to combine SystemC modules with Simulink.

This wrapper initializes the SystemC kernel and converts Simulink data types to SystemC signals and vice versa. Simulation control is entirely handled by Simulink, and some extensions of the SystemC kernel are required for the initialization and simulation tasks. In [7], SystemC calls MATLAB using the engine library. MATLAB provides interfaces to external routines written in other programming languages: using the C engine library, it is possible to share data between SystemC models and MATLAB. A simple working demo shows how to use the library to send data to and retrieve data from the MATLAB workspace and to plot some results. The main difference from [6] lies in the simulation control: SystemC is now the master of the simulation and MATLAB operates as a slave process. Also, Simulink is not supported in this example.


In a similar way, MathWorks provides a commercial solution to close the gap between the algorithmic domain and hardware design. Link for ModelSim [8] is a co-simulation interface that integrates MATLAB and Simulink into the hardware design flow. It provides a link between MATLAB/Simulink and Model Technology's HDL simulator, ModelSim. This interface makes the verification and co-simulation of RTL models possible from within MATLAB and Simulink. As opposed to the two previous techniques, there is no support for system-level languages like SystemC.

These approaches [6, 7, 8] all try to reduce the barrier that exists between higher-level modeling and existing hardware design flows. While [8] is a fully functional commercial tool for RTL verification, [6, 7] suffer from their embryonic stage (i.e. incomplete solutions for hardware design and verification).

The authors in [9] look at the problem of co-simulating continuous systems with discrete systems. The increasing complexity of continuous/discrete systems makes their simulation and validation a demanding task in the design of heterogeneous systems. They propose a co-simulation interface based on Simulink and SystemC. The main objective of the proposed solution is to provide a framework for the modeling and simulation of continuous/discrete systems.

In [10], the authors created a co-simulation tool called COLIF, which defines a subset of Matlab/Simulink combined with a set of descriptive rules that allows efficient specification and functional validation of application algorithms. To reduce the gap between the functional model and the SystemC architecture model, they proposed a new intermediate transactional model, executable in Simulink, that combines both the algorithm and the architecture in a single model representation. To validate their work, they applied it to an MPEG Layer III decoder. They found that the Simulink simulation model is 50 times faster than the macro-level architecture model. The difference is mainly due to the complexity of the description and the communication details present at the macro-architecture level.

In our former work [11], we adopted a communication and synchronization methodology. To exchange data between a Simulink model and a SystemC module, the co-simulation interface must integrate a bridge between the two simulators. This bridge is built with two Simulink S-Functions. An S-Function is a computer-language description of a Simulink block; it uses a call syntax that allows it to interact with the Simulink solvers. For our bridge, we create two C++ S-Functions.


The representation of simulation time differs significantly between SystemC and Matlab. SystemC is a cycle-based simulator, and simulation occurs at multiples of the SystemC resolution limit; the default time resolution is one picosecond, and this limit can be changed with the function sc\_set\_time\_resolution. Simulink, however, maintains simulation time as a double-precision value scaled to seconds. Thus, our co-simulation interface uses a one-to-one correspondence between simulation time in Simulink and SystemC.

## **3. Methodologies**

The implementation of applications on embedded systems is a very time-consuming task using standard development tools. The proposed heterogeneous model is also executable, in order to simulate the co-design implementation. Such simulation of the heterogeneous model is realized using SystemC. In fact, the description of a hardware module is transformed into a structural description with SystemC components (RT level). Then, the interface between the hardware and software parts is implemented using special SystemC constructs. This interface can be compared with the interface of the implementation in the real system. SystemC provides several levels of abstraction to describe hardware. For the simulation of hardware modules in the design flow shown in Fig. 1, the cycle-accurate (CA) level of SystemC is used. The interface to the software kernel is at the untimed functional (UTF) level. A wrapper was designed to connect the modules to the software kernel. This wrapper is based on two shell blocks which connect the CA model to the software kernel by realizing an interface between the CA and UTF models of SystemC.

Simulink is a commonly used tool for designing DSP applications. It provides many libraries that support the development of machine vision operators, e.g. the possibility to generate intelligent test environments for images. To use the tool for the generation of hardware operators, an interface between SystemC and Simulink was developed, integrating SystemC into common design flows through Simulink S-Functions. These functions provide a powerful mechanism for extending Simulink with custom blocks and can be implemented as C++ code. Within the S-Function, the output is calculated from the inputs and states at each time step, using a cycle-by-cycle SystemC simulation as a fixed-step discrete-time solver. The initialization of the SystemC kernel should be separated from the simulation.



**Figure 1.** Integrated SystemC in Simulink S-Function.


To meet these requirements, a wrapper has been inserted between the S-Function and the SystemC model (Fig. 1). The wrapper functionalities are:

- connecting Simulink ports to a SystemC-TM block,
- initializing the SystemC kernel,
- converting Simulink data types to SystemC-TM signals and vice versa,
- converting events: a function call from Simulink triggers sc\_cycle(),
- providing a DLL interface to the Simulink S-Function.

So, our methodology pushes the idea a step further than just a co-simulation interface: it is a complete verification solution. It uses MATLAB external interfaces, similar to the example described in [6], to exchange data between SystemC and Simulink. Once this link is established, it opens up a wide range of additional capabilities for SystemC, like stimulus generation and data visualization [10]. We also base our methodology on a portion of the methodology of [11], which rests on the transformation of a task into SystemC. The first advantage of our technique is to use the right tool for the right task: complex stimulus generation and signal-processing visualization are carried out with MATLAB and Simulink, while hardware verification is performed with the SystemC verification standard. The second advantage is a SystemC-centric approach allowing greater flexibility and configurability.

With this approach, the overall system simulation can be controlled by Simulink through the settings of the simulation duration and step size.


There are three new call-backs provided via virtual methods for classes derived from sc\_module, sc\_port, sc\_export, and sc\_prim\_channel. These call-backs will be invoked by the SystemC simulation kernel when certain phases of the simulation process occur. The new methods are:

```
void before_end_of_elaboration();
```
This method is called just before the end of elaboration processing is to be done by the simulator.

```
void start_of_simulation();
```

This method is called just before the start of simulation. It is intended to allow users to set up variable traces and other verification functions that should be done at the start of simulation.

```
void end_of_simulation();
```
If a call to sc\_stop() had been made this method will be called as part of the clean up process as the simulation ends. It is intended to allow users to perform final outputs, close files, storage, etc.

It is also possible to test whether the callbacks to the start\_of\_simulation methods or end\_of\_simulation methods have occurred. The Boolean functions sc\_start\_of\_simulation\_invoked() and sc\_end\_of\_simulation\_invoked() will return true if their respective callbacks have occurred.

The tasks at the transactional level under Simulink are included in a software node represented by a sub-system having the prefix 'SW\_' in its name. These tasks are modelled under Simulink in several ways.

They can be formed by merging several blocks into one subsystem whose name is preceded by the prefix 'TASK\_', or they can be formed by individual blocks. The latter, in turn, can be predefined library blocks or functions modelled in C.

In what follows, the modelling of tasks in SystemC is explained before describing the various accepted ways of transforming transactional Simulink tasks into tasks described in SystemC.

For the modelling and description of tasks in SystemC, we used the notion of "SC\_MODULE". A module can be hierarchical, containing other modules, or elementary, containing an active or passive behaviour using the elementary "SC\_CTHREAD" processes. Communication, in turn, is determined through a communication interface, described as a set of ports which can be inputs, outputs or inputs/outputs. SystemC also supplies a specific port for modelling a physical clock. Figure 2 shows the header file of a task described in SystemC. The interface of this module is formed by an input port and an output port of type 'long int'. The task has a service access port 'SAP', which allows synchronization of tasks in the co-simulation.


**Figure 2.** Example of a header ('.h') file of a corresponding SystemC task.

Figure 3, in turn, shows the main '.cpp' file. The main calculation is done in the body of this task. This module communicates with the system through the interfaces represented by the input and output ports 'DATA\_IN1' and 'DATA\_OUT1', by means of APIs defined in the library.

**Figure 3.** Example of an implementation ('.cpp') file of a corresponding SystemC task.

#### **3.1. Transformation of Simulink S-functions into SystemC tasks**

SystemC is used by the synthesis and co-simulation tools in the design flow of the proposed heterogeneous systems. The design process always begins with the specification of the application in the Simulink environment using S-function blocks. S-functions are developed in C according to precise rules, through methods called by the Simulink simulator, and each S-function is formed by four essential methods. In our work, an S-function block is converted into a SystemC module formed by a 'thread' sensitive to a 'SAP' signal. The S-function C file is translated directly into a header file and an implementation file in C++. To better understand the transformation of an S-function into a task, we divide it into four parts.



In the first part, global variables are defined and the header files ('.h') are included. **S-function**: the header files of the Simulink library (simstruc.h, ...), macros, header files of the code, and global variables are defined. **SystemC**: the header files of the SystemC library, macros, code header files and global variables are defined.

In the second part, variables are initialized and the input and output ports are defined. **S-function**: this part is formed by the method mdlInitializeSizes(SimStruct \*S), where variables are initialized and the number and size of the input and output ports are defined. **SystemC**: this part is divided between the header file and the implementation file. In the first, the types of the ports are defined; in the second, the module ports are declared and initialized. The type of each port depends on the type of communication it uses (shared memory, FIFO, signal synchronization).

In the third part, the APIs, the communication and the main calculation are developed, inside a loop that is repeated several times. **S-function**: the method mdlOutputs(SimStruct \*S) is used in this part; the main calculation of the block is made, and the data to be transmitted are assigned to the ports using the operator "=", which is the communication primitive. **SystemC**: the for(;;) loop in the implementation file contains the main calculation of the module. The calculation code in C is similar to that of the S-function.

The difference in this part occurs at the level of the communication primitives. In the S-function, reading and writing a data port go through the assignment operator "=". In SystemC there are two types of communication primitives, reading APIs and writing APIs, each specific to the communication protocol of the port.


The final part runs at the end of the simulation. **S-function**: this part is formed by the method mdlTerminate(SimStruct \*S). **SystemC**: this part comes after the end of the for(;;) loop of part three, at the end of the module.
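As a structural illustration only (not the generated code, and with invented names), the four-part mapping can be mocked in plain C++, with `std::queue` standing in for a SystemC FIFO port:

```cpp
#include <cassert>
#include <queue>

// Structural sketch of the four-part mapping; all names are ours.
// Part I: includes, macros and global variables.
struct Task {
    // Part II: ports declared here; std::queue stands in for a FIFO channel.
    std::queue<int>* data_in = nullptr;
    std::queue<int>* data_out = nullptr;
    int gain = 1;

    // Part II: initialization (plays the role of mdlInitializeSizes).
    void init(std::queue<int>* in, std::queue<int>* out, int g) {
        data_in = in; data_out = out; gain = g;
    }
    // Part III: the repeated computation (plays the role of mdlOutputs and of
    // the for(;;) loop); reads and writes go through port APIs, not "=".
    void run() {
        while (!data_in->empty()) {
            int v = data_in->front(); data_in->pop();  // read-API stand-in
            data_out->push(v * gain);                  // write-API stand-in
        }
    }
    // Part IV: end-of-simulation cleanup (plays the role of mdlTerminate).
    void terminate() { data_in = nullptr; data_out = nullptr; }
};
```

The sketch only shows where each of the four parts lands; a real generated task would use SystemC ports and the communication-library APIs instead.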

## **3.2. Creating a SystemC task from a predefined block of the Simulink library**

In the case of an elementary block of a type other than an S-function, included in a software node (a subsystem with the prefix 'SW\_'), the SystemC tasks are generated from a library of functions describing the behaviour of all the Simulink blocks used in the application.

Each function has the same name as the Simulink block and as the corresponding module in our methodology. However, reading and writing data go through APIs specific to each communication protocol. These APIs exist in the communication library, and the type of communication protocol is identified in the 'Port' of each module of our methodology. Figure 4 shows the generation of a SystemC task from an individual block in transactional Simulink; the block is transformed into a parameterized module of our methodology.

**Figure 4.** Generating a task from a basic block.



## **3.3. Merging several Simulink blocks into one SystemC task**

In the case where several blocks are grouped in a subsystem representing a task whose name is prefixed with 'TASK\_', the SystemC task is generated by assembling several library functions into a single SystemC task. The functions have the same names as the blocks and exchange data via common variables. Inter-thread communication with the system goes through the APIs generated according to the communication protocol defined in our methodology.

Figure 5 illustrates the merging of several blocks of transactional Simulink to generate a task in SystemC. The library functions F0(), F1() have the same names as the blocks F0, F1. The APIs are generated by identifying the type of protocol at each port of the module in the virtual architecture of our methodology.
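A minimal sketch of this fusion, with invented computations inside hypothetical library functions F0() and F1() that exchange data through a common variable, the way the merged blocks of Figure 5 would inside one task body:

```cpp
#include <cassert>

// Hypothetical library functions named after the Simulink blocks F0 and F1;
// they exchange data through a common variable. The surrounding task would
// talk to the rest of the system through the generated APIs.
namespace task_f0_f1 {
    int shared = 0;                  // common variable linking F0 and F1

    void F0(int in) { shared = in + 1; }     // first block (made-up computation)
    int  F1()       { return shared * 2; }   // second block (made-up computation)

    // One pass of the merged SystemC-style task body: F0 feeds F1.
    int run(int in) { F0(in); return F1(); }
}
```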


**Figure 5.** Generating a task from a set of blocks in Simulink.

## **4. JPEG compression algorithm**

The baseline JPEG compression algorithm is the most basic form of sequential DCT-based compression [12]. The process of JPEG-based encoding and decoding of images varies according to color depth (8, 24 or 32 bits); however, the basic approach is the same for all color depths. The bitmap image stores raw pixel-by-pixel color values. In addition, 54 bytes are stored at the start of the file as header information, which includes image width and height, image file size, image color depth, etc. These 54 bytes must be taken into account whenever working with bitmap images. Following the 54-byte header, the bitmap image holds the color value of each pixel, whose size varies with the color depth: for an 8-bit image, this is simply one byte (8 bits) per pixel, and for a 32-bit image, it is 4 bytes per pixel. For 8-bit pixels, the pre-processing stage divides the image data into 8x8 blocks that are shifted from unsigned integers with range [0, 2⁸ – 1] to signed integers with range [–2⁷, 2⁷ – 1] and then individually compressed at the 8x8 block level. In addition to this preprocessing, the compression of each block goes through the following processes:

- Discrete Cosine Transform (DCT)
- Quantization
- Zig-zag ordering
- Entropy Encoding (commonly Huffman)

Decompression is an inverse process that performs the individual inverse of all the above processes.

#### **4.1. 8x8 FDCT and IDCT**
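The level-shifting step of the pre-processing stage can be sketched as follows (the function name is ours):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Pre-processing level shift: 8-bit samples in [0, 255] are shifted to
// signed values in [-128, 127] before being fed to the FDCT.
std::vector<int> level_shift(const std::vector<std::uint8_t>& block) {
    std::vector<int> out;
    out.reserve(block.size());
    for (std::uint8_t s : block) out.push_back(static_cast<int>(s) - 128);
    return out;
}
```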

At the input to the encoder, source image samples are grouped into 8x8 blocks, shifted from unsigned integers with range [0, 2⁸ – 1] to signed integers with range [–2⁷, 2⁷ – 1], and input to the Forward DCT (FDCT). At the output from the decoder, the Inverse DCT (IDCT) outputs 8x8 sample blocks to form the reconstructed image. The following equations are the idealized mathematical definitions of the 8x8 FDCT and 8x8 IDCT:

$$F(u,v) = \frac{1}{4}C(u)C(v)\left[\sum_{x=0}^{7}\sum_{y=0}^{7} f(x,y)\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16}\right] \tag{1}$$

$$f(x,y) = \frac{1}{4}\sum_{u=0}^{7}\sum_{v=0}^{7} C(u)C(v)F(u,v)\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16} \tag{2}$$

where $u, v, x, y = 0, 1, \dots, 7$ and

$$C(u), C(v) = \begin{cases} 1/\sqrt{2} & \text{for } u, v = 0 \\ 1 & \text{otherwise.} \end{cases}$$
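A direct, unoptimized C++ transcription of equations (1) and (2) can serve as a reference model; absent quantization, the IDCT recovers the FDCT input exactly (up to floating-point error):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Direct (slow) implementation of equations (1) and (2) for one 8x8 block.
using Block = std::array<std::array<double, 8>, 8>;
const double PI = std::acos(-1.0);

double C(int k) { return k == 0 ? 1.0 / std::sqrt(2.0) : 1.0; }

Block fdct(const Block& f) {                  // equation (1)
    Block F{};
    for (int u = 0; u < 8; ++u)
        for (int v = 0; v < 8; ++v) {
            double s = 0.0;
            for (int x = 0; x < 8; ++x)
                for (int y = 0; y < 8; ++y)
                    s += f[x][y] * std::cos((2 * x + 1) * u * PI / 16.0)
                                 * std::cos((2 * y + 1) * v * PI / 16.0);
            F[u][v] = 0.25 * C(u) * C(v) * s;
        }
    return F;
}

Block idct(const Block& F) {                  // equation (2)
    Block f{};
    for (int x = 0; x < 8; ++x)
        for (int y = 0; y < 8; ++y) {
            double s = 0.0;
            for (int u = 0; u < 8; ++u)
                for (int v = 0; v < 8; ++v)
                    s += C(u) * C(v) * F[u][v]
                       * std::cos((2 * x + 1) * u * PI / 16.0)
                       * std::cos((2 * y + 1) * v * PI / 16.0);
            f[x][y] = 0.25 * s;
        }
    return f;
}
```

Practical codecs replace this O(n⁴)-per-block form with fast factored DCTs, but the direct form is useful for checking them.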

The DCT is related to the Discrete Fourier Transform (DFT). Some simple intuition for DCT-based compression can be obtained by viewing the FDCT as a harmonic analyzer and the IDCT as a harmonic synthesizer. Each 8x8 block of source image samples is effectively a 64-point discrete signal which is a function of the two spatial dimensions x and y. The FDCT takes such a signal as its input and decomposes it into 64 orthogonal basis signals. Each contains one of the 64 unique two-dimensional (2D) "spatial frequencies'' which comprise the input signal's "spectrum." The output of the FDCT is the set of 64 basis-signal amplitudes or "DCT coefficients" whose values are uniquely determined by the particular 64-point input signal.

The DCT coefficient values can thus be regarded as the relative amount of the 2D spatial frequencies contained in the 64-point input signal. The coefficient with zero frequency in both dimensions is called the "DC coefficient" and the remaining 63 coefficients are called the "AC coefficients.'' Because sample values typically vary slowly from point to point across an image, the FDCT processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8x8 sample block from a typical source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded.


**Figure 6.** The JPEG decoder.

At the decoder the IDCT reverses this processing step. It takes the 64 DCT coefficients (which at that point have been quantized) and reconstructs a 64-point output image signal by summing the basis signals. Mathematically, the DCT is a one-to-one mapping for 64-point vectors between the image and the frequency domains. If the FDCT and IDCT could be computed with perfect accuracy and if the DCT coefficients were not quantized as in the following description, the original 64-point signal could be exactly recovered. In principle, the DCT introduces no loss to the source image samples; it merely transforms them to a domain in which they can be more efficiently encoded. Some properties of practical FDCT and IDCT implementations raise the issue of what precisely should be required by the JPEG standard. A fundamental property is that the FDCT and IDCT equations contain transcendental functions.

### **4.2. Quantization**

After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element Quantization Table, which must be specified by the application (or user) as an input to the encoder. Each element can be any integer value from 1 to 255, which specifies the step size of the quantizer for its corresponding DCT coefficient. The purpose of quantization is to achieve further compression by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality. Stated another way, the goal of this processing step is to discard information which is not visually significant. Quantization is a many-to-one mapping, and therefore is fundamentally lossy. It is the principal source of lossiness in DCT-based encoders.

Quantization is defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer:

$$F^{Q}(u,v) = \mathrm{IntegerRound}\left(\frac{F(u,v)}{Q(u,v)}\right) \tag{3}$$

This output value is normalized by the quantizer step size. Dequantization is the inverse function; in this case it simply means that the normalization is removed by multiplying by the step size, which returns the result to a representation appropriate for input to the IDCT:

$$F^{Q'}(u,v) = F^{Q}(u,v) \ast Q(u,v) \tag{4}$$
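Equations (3) and (4) for a single coefficient can be sketched as follows (function names are ours):

```cpp
#include <cassert>
#include <cmath>

// Equation (3): quantize one DCT coefficient with step size Q,
// rounding to the nearest integer.
int quantize(double F, int Q) {
    return static_cast<int>(std::lround(F / Q));
}

// Equation (4): dequantize by multiplying back by the step size.
// The rounding in (3) is not undone, which is where the loss occurs.
double dequantize(int Fq, int Q) {
    return static_cast<double>(Fq) * Q;
}
```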

When the aim is to compress the image as much as possible without visible artifacts, each step size ideally should be chosen as the perceptual threshold or "just noticeable difference" for the visual contribution of its corresponding cosine basis function. These thresholds are also functions of the source image characteristics, display characteristics and viewing distance. For applications in which these variables can be reasonably well defined, psychovisual experiments can be performed to determine the best thresholds.

#### **4.3. DC Coding and Zig-Zag sequence**


After quantization, the DC coefficient is treated separately from the 63 AC coefficients. The DC coefficient is a measure of the average value of the 64 image samples. Because there is usually strong correlation between the DC coefficients of adjacent 8x8 blocks, the quantized DC coefficient is encoded as the difference from the DC term of the previous block in the encoding order (defined in the following), as shown in Figure 7. This special treatment is worthwhile, as DC coefficients frequently contain a significant fraction of the total image energy.

**Figure 7.** Preparation of Quantized Coefficients for Entropy Coding

Finally, all of the quantized coefficients are ordered into the "zig-zag" sequence, also shown in Figure 7. This ordering helps to facilitate entropy coding by placing low-frequency coefficients (which are more likely to be nonzero) before high-frequency coefficients.
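Both preparation steps can be sketched in C++ (names are ours): generating the zig-zag visiting order for an 8x8 block, and differentially coding the DC terms across blocks:

```cpp
#include <array>
#include <cassert>
#include <vector>

// Generate the zig-zag visiting order for an 8x8 block (row-major indices).
std::array<int, 64> zigzag_order() {
    std::array<int, 64> order{};
    int r = 0, c = 0;
    for (int i = 0; i < 64; ++i) {
        order[i] = r * 8 + c;
        if ((r + c) % 2 == 0) {            // moving up-right
            if (c == 7)      ++r;
            else if (r == 0) ++c;
            else { --r; ++c; }
        } else {                           // moving down-left
            if (r == 7)      ++c;
            else if (c == 0) ++r;
            else { ++r; --c; }
        }
    }
    return order;
}

// DC differential coding: each quantized DC term is replaced by its
// difference from the DC term of the previous block (0 before the first).
std::vector<int> dc_differences(const std::vector<int>& dc) {
    std::vector<int> diff;
    int prev = 0;
    for (int d : dc) { diff.push_back(d - prev); prev = d; }
    return diff;
}
```

The first few zig-zag positions (0, 1, 8, 16, 9, 2, ...) show the intended effect: low-frequency coefficients are visited before high-frequency ones.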

#### **4.4. Entropy coding / Huffman**

Huffman coding is a technique which assigns a variable-length codeword to an input data item, giving a smaller codeword to an input that occurs more frequently. It is very similar to Morse code, which assigned shorter pulse combinations to letters that occurred more frequently. Huffman coding is variable-length coding, where characters are not coded to a fixed number of bits.
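A minimal sketch of Huffman code construction (the general technique only, not the JPEG-specified code tables):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Minimal Huffman code construction: repeatedly merge the two least
// frequent nodes; more frequent symbols end up with shorter codes.
struct Node {
    int freq;
    char sym;                 // valid only for leaves
    int left = -1, right = -1;
};

std::map<char, std::string> huffman_codes(const std::map<char, int>& freqs) {
    std::vector<Node> pool;
    using Item = std::pair<int, int>;  // (frequency, node index), min-heap
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    for (auto& [s, f] : freqs) {
        pool.push_back({f, s});
        pq.push({f, (int)pool.size() - 1});
    }
    while (pq.size() > 1) {
        auto [fa, a] = pq.top(); pq.pop();
        auto [fb, b] = pq.top(); pq.pop();
        pool.push_back({fa + fb, 0, a, b});     // internal node
        pq.push({fa + fb, (int)pool.size() - 1});
    }
    std::map<char, std::string> codes;
    // Iterative DFS assigning '0'/'1' along the tree.
    std::vector<std::pair<int, std::string>> stack{{pq.top().second, ""}};
    while (!stack.empty()) {
        auto [i, code] = stack.back(); stack.pop_back();
        if (pool[i].left < 0) { codes[pool[i].sym] = code.empty() ? "0" : code; continue; }
        stack.push_back({pool[i].left, code + "0"});
        stack.push_back({pool[i].right, code + "1"});
    }
    return codes;
}
```

JPEG itself uses predefined or custom Huffman tables over (run length, size) symbols rather than building a tree per image, but the length property illustrated here is the same.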


This is the last step in the encoding process. It organizes the data stream into a smaller number of output data packets by assigning unique codewords that can later be reconstructed without loss during decompression. For the JPEG process, each combination of run length and size category from the run-length coder is assigned a Huffman codeword.

## **4.5. Decomposition and implementation of the JPEG algorithm**

It is possible to increase speed and reduce power consumption by running portions of the algorithm in custom hardware. To do this, part of the algorithm remains in software (SW) while the other part moves to the hardware (HW) side, and this split must be well chosen. This is called hardware/software partitioning (HW/SW partitioning). Many factors must be considered when the HW/SW partitioning is done. The problem is to use the right amount of hardware: using too much hardware implies a rise in costs and probably an increase in the time to market.

The first step in HW/SW partitioning is to identify the parts of the algorithm that consume a lot of time if left in software, either by implementing the algorithm entirely in software or by estimating the number of cycles. The next step is to evaluate and decide which parts need to be moved to the HW side. It is important to take into account more than just the part that consumes the most software cycles: perhaps it is better to leave one computation-intensive part in software and move some other parts to HW, the parts that are better suited for hardware implementation. This is of course possible only if the timing constraints can still be met with the most intensive computation remaining in software.

To make a good HW/SW partitioning, a simulation tool is needed in which functionality can easily be moved from the HW side to the SW side and vice versa. In addition, it should be possible to specify the execution time of the different parts. This part of the design process is important: time spent here is well spent and often reduces the work in later phases. If the processor architecture must also be chosen in the design process, the problem becomes even more complicated. With a more powerful processor, it is probably possible to do more in software and thus reduce the cost of designing and manufacturing the hardware; the question then is how this affects the total cost. The entire HW/SW partitioning problem is an optimization problem with typical constraints on silicon area, energy, monetary cost and execution time, and the time-to-market aspect must also be considered.

In this section, we illustrate the approach we have followed for the implementation of JPEG through our methodology. As presented previously, the most important part of the compression chain is the DCT, which involves a large amount of calculation. We therefore implement this part in SystemC, and the rest of the compression chain is implemented in MATLAB.
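As a toy illustration of this optimization view (with made-up numbers, an invented `Part` record, and a serial-execution-time simplification), a brute-force partitioner might look like:

```cpp
#include <cassert>
#include <vector>

// Toy brute-force HW/SW partitioner (illustrative only): each part has a SW
// execution time, a HW execution time and a HW area cost; we pick the subset
// moved to HW that minimizes total (serial) time within an area budget.
struct Part { int sw_time; int hw_time; int hw_area; };

// Returns a bitmask of parts moved to HW, or -1 if nothing is feasible.
int best_partition(const std::vector<Part>& parts, int area_budget) {
    int n = (int)parts.size(), best_mask = -1, best_time = -1;
    for (int mask = 0; mask < (1 << n); ++mask) {
        int time = 0, area = 0;
        for (int i = 0; i < n; ++i) {
            if (mask & (1 << i)) { time += parts[i].hw_time; area += parts[i].hw_area; }
            else                 { time += parts[i].sw_time; }
        }
        if (area <= area_budget && (best_mask < 0 || time < best_time)) {
            best_mask = mask; best_time = time;
        }
    }
    return best_mask;
}
```

With a compute-heavy DCT-like part and a lighter entropy-coding part, a tight area budget pushes only the DCT-like part to hardware, which matches the choice made in this chapter.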

The following gives a guide to the implementation of the JPEG compression algorithm shown in Figure 8.


**Figure 8.** Implementing the JPEG algorithm.

132 MATLAB – A Fundamental Tool for Scientific Computing and Engineering Applications – Volume 2

**4.5. Decomposition and implementation of the JPEG algorithm** 

characters are not coded to a fixed number of bits.

suffering the most intense in the software calculation.

algorithm in Figure 8.

frequently. It is very similar to Morse code, which assigned smaller pulse combinations to letters that occurred more frequently. Huffman coding is variable length coding, where

This is the last step in the encoding process. It organizes the data stream into a smaller number of output data packets by assigning unique codewords that later during decompression can be reconstructed without loss. For the JPEG process, each combination of run length and size category, from the run length coder, are assigned a Huffman codeword.

It is possible to increase speed and reduce power consumption by running portions of the algorithm in custom hardware. To do this, some parts of the algorithm remain in software while the others are moved to hardware, and this split must be chosen carefully. This is called hardware/software partitioning (HW/SW partitioning). Many factors must be considered when the HW/SW partitioning is done. The problem is to use the right amount of hardware: using too much hardware implies a rise in cost and probably an increase in time to market.

The first step in HW/SW partitioning is to identify the parts of the algorithm that consume a lot of time if left in software, either by implementing the algorithm entirely in software or by estimating the number of cycles. The next step is to evaluate and decide which parts need to be moved to hardware. It is important to take into account more than just the part that consumes the most cycles in software. It may be better to leave that computation-intensive part in software and move some other parts, better suited for hardware implementation, to hardware. This is of course possible only if the timing constraints can still be met.

To make a good HW/SW partitioning, a simulation tool is needed in which functionality can easily be moved between the hardware and software domains, and vice versa. In addition, it should be possible to specify the execution time of the different parts. This part of the design process is important; time spent here is well invested and often reduces the work in later phases. If the processor architecture must also be chosen during the design process, the problem becomes even more complicated. With a more powerful processor, more can probably be done in software, thus reducing the cost of designing and manufacturing the hardware. The question then is how this affects the total cost. The entire HW/SW partitioning problem is an optimization problem whose typical constraints are silicon area, energy, monetary cost and execution time, so the time-to-market aspect must also be considered. In this section, we illustrate the approach we have followed for the implementation of JPEG through our methodology. As previously presented, the most computationally intensive part of the compression chain is the DCT. We therefore implement this part in SystemC, while the rest of the compression chain is implemented in MATLAB.

The following gives a guide to the implementation of the JPEG compression chain.
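The cycle-and-area driven selection described above can be pictured as a simple greedy heuristic. This is only an illustration: the structure, names and numbers below are ours, not the chapter's, and a real partitioning would use measured profiles and a more global optimization.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical profile entry for one part of the algorithm: cycles consumed
// in software, estimated cycles in hardware, and the silicon area an HW
// implementation would occupy.
struct Part {
    std::string name;
    long swCycles;
    long hwCycles;
    int  area;
};

// Greedy heuristic: move the parts with the largest cycle savings per unit
// of area to hardware until the area budget is exhausted.
std::vector<std::string> partition(std::vector<Part> parts, int areaBudget) {
    std::sort(parts.begin(), parts.end(), [](const Part& a, const Part& b) {
        return double(a.swCycles - a.hwCycles) / a.area >
               double(b.swCycles - b.hwCycles) / b.area;
    });
    std::vector<std::string> toHardware;
    for (const Part& p : parts) {
        if (p.hwCycles < p.swCycles && p.area <= areaBudget) {
            toHardware.push_back(p.name);
            areaBudget -= p.area;
        }
    }
    return toHardware;
}
```

With profiled cycle counts for, say, the DCT, quantizer and entropy coder, `partition` returns the parts to move to hardware; the real trade-off also weighs energy, monetary cost and time to market, as noted above.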

MATLAB lets us choose the video by clicking on the video source block. A window opens in which we specify the video location and its parameters, as presented in Figure 9.


**Figure 9.** Choosing the video.

A click on the Block Processing block opens a window containing the parameters of this block: the number of inputs (1 in our case), the number of outputs (2 in our case), and two further parameters, the block size and the overlap. A click on Open Subsystem opens another window in which we find the block that we have just parameterized, as indicated in Figure 10.
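As a rough illustration of what the block size and overlap parameters mean, the sketch below (our own, not Simulink code) enumerates the pixel regions a block-processing pass would visit; each base block is extended by the overlap on every side, clipped at the image borders.

```cpp
#include <algorithm>
#include <vector>

// One rectangular region of the image, in pixels.
struct Region { int row, col, height, width; };

// Enumerate the regions visited when an image of rows x cols pixels is
// processed in block x block tiles, each extended by `overlap` pixels
// on every side (clipped at the image borders).
std::vector<Region> blockRegions(int rows, int cols, int block, int overlap) {
    std::vector<Region> regions;
    for (int r = 0; r < rows; r += block) {
        for (int c = 0; c < cols; c += block) {
            int r0 = std::max(0, r - overlap);
            int c0 = std::max(0, c - overlap);
            int r1 = std::min(rows, r + block + overlap);
            int c1 = std::min(cols, c + block + overlap);
            regions.push_back({r0, c0, r1 - r0, c1 - c0});
        }
    }
    return regions;
}
```

For a 16x16 image with 8x8 blocks and no overlap this yields four 8x8 regions; with an overlap of 2, each region grows by the overlap wherever it does not touch an image border.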




**Figure 10.** Parameters of Block Processing.

Figure 11 below shows the different parts of the implementation of the JPEG encoder.

**Figure 11.** Implementing the JPEG algorithm.

As shown in the chain, the DCT is the most important part and contains much of the computation. This part of the chain will be developed in SystemC and represents the hardware part. We explain it using an example process named 'DCT' (in the JPEG encoder) in SystemC, as shown in Figure 12.

**Figure 12.** The DCT in SystemC.
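Figure 12 itself is not reproduced here, but the computation inside such a 'DCT' process is the standard 8x8 two-dimensional DCT-II used by JPEG. A plain, unoptimized C++ sketch of that kernel (without the SystemC module and port declarations) is:

```cpp
#include <cmath>

const int N = 8;

// Direct (unoptimized) 2-D DCT-II of one 8x8 pixel block, as used by JPEG.
void dct8x8(const double in[N][N], double out[N][N]) {
    const double PI = 3.14159265358979323846;
    for (int u = 0; u < N; ++u) {
        for (int v = 0; v < N; ++v) {
            const double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            const double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    sum += in[x][y]
                         * std::cos((2 * x + 1) * u * PI / (2 * N))
                         * std::cos((2 * y + 1) * v * PI / (2 * N));
            out[u][v] = 0.25 * cu * cv * sum;  // JPEG normalization
        }
    }
}
```

In the SystemC module, this loop nest sits inside the process body, reading each 8x8 block from the input FIFO and writing the coefficients to the output FIFO.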


It has two FIFO channels, one for receiving data and the other for sending data. From the SystemC code, we remove all SystemC-dependent statements and exchange the FIFO read/write operations.

**Figure 13.** Two FIFO channels.
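The statement above, "exchange the FIFO read/write", can be pictured with a minimal plain C++ stand-in for SystemC's `sc_fifo`: a blocking read/write queue with the same call shape, so FIFO accesses can be swapped one-for-one once the SystemC-dependent statements are removed. This is a sketch of the idea, not the code of the chapter's implementation.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal stand-in for a SystemC sc_fifo<T>: blocking read, non-blocking
// write (unbounded queue), with the same call shape as sc_fifo accesses.
template <typename T>
class Fifo {
public:
    void write(const T& v) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(v);
        cv_.notify_one();
    }
    T read() {  // blocks until data is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T v = q_.front();
        q_.pop();
        return v;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};
```

Usage mirrors `sc_fifo`'s blocking `read()`/`write()`: `Fifo<int> f; f.write(x); int y = f.read();`.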

To proceed to an FPGA implementation, the resulting netlist from the previous stage has to be mapped to the FPGA's logic-block structure and interconnect. The main outcome of this technology mapping, placing and routing is a bitstream which can be programmed into an FPGA (Figure 13).

## **4.6. Results**

The virtual architecture model is described using the SystemC language and is generated according to the parameters specified in the initial Simulink model. SystemC allows modeling a system at different abstraction levels, from functional to pin-accurate register transfer level.


The virtual architecture is modeled using transaction level modeling (TLM) techniques, which allow analyzing the FPGA architecture in an earlier phase of design, software development and timing estimation. At the virtual architecture level, the Simulink functions of the application are transformed into SystemC program code for each task. This step is very similar to the code generation performed by Real-Time Workshop (RTW).

Contrary to RTW, which generates only single-task code, the software at the virtual architecture level is a multitasking SystemC code description of the initial Simulink application model. The generation also has to support user-defined SystemC code integrated in the Simulink model as S-functions. For the S-functions, the task code is a function call of the user-written SystemC function. The semantics of the argument passing are identical to those defined in the configuration panel of the S-Function Builder tool in Simulink. The hardware is refined to a set of abstract SystemC modules (SC\_MODULE), one per subsystem. The SC\_MODULE of the processor includes the task modules that are mapped on the processor and the communication channels for the intra-subsystem communication between the tasks inside the same processor. The communication channels between the tasks mapped on the FPGA are implemented using standard SystemC channels. The task modules are implemented as SystemC modules (SC\_MODULE). The development of the JPEG decoder application in Simulink requires 7 S-functions in order to integrate the SystemC code of the main parts of the decoding algorithm: jpeg\_sfun\_h, dct\_sfun\_h, sfc\_sf.h, sfc\_mex.h, sfcdebug.h, jpeg\_sfun.mexw32, dct\_sfun.mexw32.

Once this link is established, it opens up a wide range of additional capabilities to SystemC, such as stimulus generation and data visualization. The first advantage of our technique is to use the right tool for the right task: complex stimulus generation and signal-processing visualization are carried out with MATLAB and Simulink, while hardware verification is performed with the SystemC verification standard. The second advantage is a SystemC-centric approach allowing greater flexibility and configurability.

In this part, we compare the previous methodology, based on communication and synchronization between the two simulators, with the new approach, based on the integration of SystemC in MATLAB/Simulink, on other applications.

CODIS (COntinuous DIscrete Simulation) is a tool which automatically produces co-simulation instances for continuous/discrete systems simulation using the SystemC and Simulink simulators. This is done by generating and providing the co-simulation interfaces and the co-simulation bus. To evaluate the performance of the simulation models generated in CODIS, the authors measured the overhead introduced by the simulation interfaces; the experiments have shown a synchronization overhead of less than 30% in simulation time [9]. In [5], a Software-Defined Radio (SDR) is a combination of digital filters, analog components and processors, each requiring a different design approach with a different tool or language. Using a traditional design flow, where the verification effort represents 70% of the total design time, results in more time spent on test-bench development and simulation runs. The total development time for this project was 192 days, compared to 131 days using the improved design flow. This represents a productivity gain of around 32% over a traditional design flow that has limited test-bench component reuse and software interoperability. Moreover, the HW/SW implementation reduced the number of clock cycles from 1,334,722 to 158,044. The reduction in the total execution time of the JPEG algorithm was 88.15%.

## **5. Conclusion**


In this chapter, we presented a new approach based on the integration of SystemC in MATLAB/Simulink. The main advantage of this approach is the possibility of modeling and verifying the overall system within the same design environment. The result is shorter design cycles for applications using heterogeneous architectures. The co-simulation interface we presented reduces the time spent on validation and verification while improving overall test-bench quality. MATLAB/Simulink assists the SystemC verification environment in a unified approach. It has been shown that the methodology allows complex stimulus generation and exhaustive data analysis for the design under verification. As FPGA designs encompass larger and larger systems, the need to efficiently model the complex external environment during the architecture and verification phases becomes greater. The whole verification flow has been evaluated using an example, and it has been shown that the extended verification flow saves a significant amount of time during the development process. The proposed platform is tested on the JPEG compression algorithm; the execution time of this algorithm is improved by 88.15% thanks to the hardware implementation of the MATLAB mult16 function using SystemC. As future work, we aim to test our platform with the whole video compression chain using MPEG4 modules and with Software-Defined Radio (SDR), which include hardware and software components that require rigorous verification all along the design flow.

## **Author details**

Walid Hassairi, Moncef Bousselmi, Mohamed Abid and Carlos Valderrama
*UMons University of Mons, Electronics & Microelectronics Dpt., Mons, Belgium*
*Laboratory CES, National School of Engineers of Sfax, Tunisia*

## **6. References**

[1] A. Avila, "*Hardware/Software Implementation of a Discrete Cosine Transform Algorithm Using SystemC*" Proceedings of the 2005 International Conference on Reconfigurable Computing and FPGAs (ReConFig 2005)

[2] M. Abid, A. Changuel, A. Jerraya, "*Exploration of Hardware/Software Design Space through a Codesign of Robot Arm Controller*" EURO-DAC '96 with EURO-VHDL '96, pp. 17-24

**Chapter 7**

**Matlab Simulink as Simulation Tool for Wind Generation Systems Based on Doubly Fed Induction Machines**

Moulay Tahar Lamchich and Nora Lachguer

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/48774

© 2012 Lamchich and Lachguer, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

In recent years, Matlab-Simulink has become the most widely used software for the modeling and simulation of dynamic systems. It provides a powerful graphical interface for building and verifying new mathematical models as well as new control strategies, particularly for nonlinear systems. Using a dSPACE prototype, these new control strategies can then be easily implemented and tested.

Wind turbine generator systems are an example of such dynamic systems, containing subsystems with different ranges of time constants: wind, turbine, generator, power electronics, transformer and grid.

There are two principal connections for wind energy conversion. The first connects the wind generator to the grid at grid frequency; while connected to the grid, the grid supplies the reactive VAR required by the induction machines. Often, a DC link with a suitable control technique is required to interface the wind-generator system to the utility grid. The second connects the wind-generator system to an isolated load in remote areas.

A wound rotor induction machine used as a Doubly Fed Induction Generator (DFIG) is nowadays becoming more widely used in wind power generation. The DFIG, connected to a back-to-back converter at the rotor terminals, provides a very economical solution for variable-speed applications. The three-phase supply is fed directly to the stator, in order to reduce cost, instead of being fed through a converter and inverter. Different techniques are adopted for the control of these converters.

The network-side converter control has been achieved using Field Oriented Control (FOC). This method involves the transformation of the currents into a synchronously rotating dq reference frame that is aligned with one of the fluxes.