Finally, to analyze the global transcoding improvement, Tables 4, 5, 6 and 7 summarize the overall transcoding performance. In this case, Bjøntegaard and Sullivan's common test conditions (Sullivan & Bjøntegaard, 2001) were not used because they are a recommendation only for H.264/AVC. Instead, to estimate the PSNR obtained by the transcoder, the original sequences were compared with the output sequences after transcoding, and the measured PSNR is reported as an average over the four QP points (mean ΔPSNR). To estimate the bitrate (BR) generated by the reference and the proposed transcoders, the BR generated by both stages (DVC decoding and H.264/AVC encoding) was added; Equation (1) was then applied and the result averaged over the four H.264/AVC QPs (mean ΔBR). As DVC decoding contributes most of the bitrate, the results are very similar to those in Table 1. To evaluate the time reduction (TR), the total transcoding time was measured for the reference and the proposed transcoders; Equation (5) was then applied and a mean was calculated over the four H.264/AVC QPs (mean TR). As DVC decoding takes up most of the transcoding time, improvements in this stage have a greater influence on the overall transcoding time, and so the TR obtained is similar to that in Table 1, reducing the complexity of the transcoding process by up to 73% (on average).
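The per-QP averaging described above can be sketched as follows. Equations (1) and (5) are not reproduced in this excerpt, so the usual percentage definitions of bitrate increment and time reduction are assumed here; the function names and the sample measurements are illustrative only, not the chapter's actual data.

```python
# Sketch of the per-QP averaging described above. The exact forms of
# Equations (1) and (5) are not reproduced in this excerpt; the standard
# percentage definitions are assumed.

def delta_br(br_ref, br_prop):
    # Assumed Equation (1): bitrate increment of the proposed transcoder
    # relative to the reference, in percent.
    return 100.0 * (br_prop - br_ref) / br_ref

def time_reduction(t_ref, t_prop):
    # Assumed Equation (5): relative saving in total transcoding time,
    # in percent (positive means the proposed transcoder is faster).
    return 100.0 * (t_ref - t_prop) / t_ref

def mean_over_qps(values):
    # Results are averaged over the four H.264/AVC QP points.
    return sum(values) / len(values)

# Hypothetical measurements for four QP points (illustrative numbers only).
br_ref  = [1200.0, 800.0, 500.0, 300.0]   # kbit/s, reference transcoder
br_prop = [1210.0, 806.0, 503.0, 302.0]   # kbit/s, proposed transcoder
t_ref   = [400.0, 380.0, 360.0, 350.0]    # s, reference transcoder
t_prop  = [110.0, 104.0, 99.0, 96.0]      # s, proposed transcoder

mean_dbr = mean_over_qps([delta_br(r, p) for r, p in zip(br_ref, br_prop)])
mean_tr  = mean_over_qps([time_reduction(r, p) for r, p in zip(t_ref, t_prop)])
print(f"mean dBR = {mean_dbr:.2f}%  mean TR = {mean_tr:.2f}%")
```

With these illustrative timings the mean TR comes out close to the 73% figure reported in the text, since the per-QP savings are roughly constant.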

**6. Conclusions**

This chapter analyzed the transcoding framework for video communications between mobile devices and proposed a WZ to H.264/AVC transcoder designed to support mobile-to-mobile video communications. Since the transcoder device accumulates the highest complexity of both video coders, reducing the time spent in this process is an important goal. With this aim, two approaches were proposed to speed up WZ decoding and H.264/AVC encoding: the first stage is improved by using parallelization techniques, while the second stage is accelerated by reusing information generated during the first stage. As a result, with both approaches a time reduction of up to 73% is achieved for the complete transcoding process, with negligible RD losses. In addition, the presented transcoder performs a mapping between different GOP patterns and lengths across the two paradigms by using an adaptive algorithm, which takes into account the MVs gathered in the side information generation process.

**7. Acknowledgements**

This work was supported by the Spanish MICINN, Consolider Programme and Plan E funds, as well as European Commission FEDER funds, under Grants CSD2006-00046 and TIN2009-14475-C04-03. It was also supported by JCCM funds under grants PEII09-0037-2328 and PII2I09-0045-9916, and by the University of Castilla-La Mancha under Project AT20101802. The work presented was performed using the VISNET2-WZ-IST software developed in the framework of the VISNET II project.

**8. References**

Aaron, A., Rui, Z. & Girod, B. (2002). Wyner-Ziv coding of motion video. In: Asilomar Conference on Signals, Systems and Computers, pp. 240-244.

Sullivan, G. & Bjøntegaard, G. (2001). Recommended Simulation Common Conditions for H.26L Coding Efficiency Experiments on Low-Resolution Progressive-Scan Source Material. ITU-T VCEG, Doc. VCEG-N81.


**Quantifying Interpretability Loss due to Image Compression**

John M. Irvine1 and Steven A. Israel2

*1Draper Laboratory, Cambridge, MA,*
*2Scientist, USA*

**1. Introduction**

Video imagery provides a rich source of information for a range of applications including military missions, security, and law enforcement. Because video imagery captures events over time, it can be used to monitor or detect activities through observation by a user or through automated processing. Inherent in these applications is the assumption that the image quality of the video data will be sufficient to perform the required tasks. However, the large volume of data produced by video sensors often requires data reduction, through video compression, frame rate decimation, or cropping of the field-of-view, to limit data storage and transmission requirements. This paper presents methods for analyzing and quantifying the information loss arising from various video compression techniques. The paper examines three specific issues:

- **Measurement of image quality**: Building on methods employed for still imagery, we present a method for measuring video quality with respect to the performance of relevant analysis tasks. We present the findings from a series of perception experiments and user studies which form the basis for a quantitative measure of video quality.

- **User-based assessments of quality loss**: The design, analysis, and findings from a user-based assessment of image compression are presented. The study considers several compression methods and compression rates for both inter- and intra-frame compression.

- **Objective measures of image compression**: The final topic is a study of video compression using objective image metrics. The findings of this analysis are compared to the user evaluation to characterize the relationship between the two and indicate a method for performing future studies using the objective measures of video quality.
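As a minimal illustration of the kind of objective image metric referred to above, the sketch below computes per-frame PSNR and averages it over a clip. PSNR is used here only as a representative example; this excerpt does not name the specific objective metrics employed in the study.

```python
# Illustrative objective quality metric: per-frame PSNR averaged over a clip.
# PSNR stands in for the unspecified objective metrics discussed in the text.
import math

def frame_psnr(original, compressed, peak=255.0):
    # original/compressed: equal-length sequences of pixel values in [0, peak].
    n = len(original)
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / n
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * math.log10(peak * peak / mse)

def clip_psnr(orig_frames, comp_frames, peak=255.0):
    # Average the per-frame PSNR over all frames of a video clip.
    scores = [frame_psnr(o, c, peak) for o, c in zip(orig_frames, comp_frames)]
    return sum(scores) / len(scores)
```

A study comparing such objective scores against user-based interpretability ratings would compute `clip_psnr` for each compressed clip and correlate the results with the user assessments.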

**1.1 Information content and analysis**

Video data provides the capability to analyze temporal events, which enables far deeper analysis than is possible with still imagery. At the primitive level, analysis of still imagery depends on the static detection, recognition, and characterization of objects, such as people or vehicles. By adding the temporal dimension, video data reveals information about the movement of objects, including changes in pose and position and changes in the spatial

