**1.1 Information content and analysis**

Video data provides the capability to analyze temporal events, enabling far deeper analysis than is possible with still imagery. At the primitive level, analysis of still imagery depends on the static detection, recognition, and characterization of objects, such as people or vehicles. By adding the temporal dimension, video data reveals information about the movement of objects, including changes in pose and position and changes in the spatial configuration of objects. This additional information can support the recognition of basic activities, associations among objects, and analysis of complex behavior (Fig. 1).

Fig. 1 presents a hierarchy of target recognition information complexity. The color of each box indicates the ability of the developer community to assess performance and provide confidence measures. The two boxes on the left exploit information in the sensor phenomenology domain; the two boxes on the right exploit features extracted from the sensor data.

To illustrate the concept, consider a security application with a surveillance camera overlooking a bank parking lot. If the bank is robbed, a camera that collects still images might acquire an image depicting the robbers exiting the building and show several cars in the parking lot. The perpetrators have been detected, but additional information is limited. A video camera might collect a clip showing these people entering a specific vehicle for their getaway. Now both the perpetrators and the vehicle have been identified, because the activity (a getaway) was observed. If the same vehicle is detected on other security cameras throughout the city, analysis of multiple videos could reveal the pattern of movement and suggest the location of the robbers' base of operations. In this way, an association is formed between the event and specific locations, namely the bank and the robbers' hideout. If the same perpetrators were observed over several bank robberies, one could discern their pattern of behavior, i.e. their *modus operandi*. This information could enable law enforcement to anticipate future events and respond appropriately (Gualdi *et al.* 2008; Porter *et al.* 2010).

Fig. 1. Image Exploitation and Analysis

Quantifying Interpretability Loss due to Image Compression 37

The NIIRS provides a common framework for discussing the interpretability, or information potential, of imagery, and serves as a standardized indicator of image interpretability within the community. Models relating NIIRS to sensor parameters and image acquisition conditions have been developed empirically and substantially increase the utility of NIIRS (Leachtenauer *et al.* 1997; Leachtenauer and Driggers 2001). An image quality equation (IQE) offers a method for predicting the NIIRS of an image based on sensor characteristics and the image acquisition conditions (Leachtenauer *et al.* 1997; Leachtenauer and Driggers 2001). Together, the NIIRS and IQE are useful for:

- Communicating the relative usefulness of the imagery,
- Managing the tasking and collection of imagery,
- Documenting requirements for imagery,
- Assisting in the design and assessment of future imaging systems, and
- Measuring the performance of sensor systems and imagery exploitation devices.

The foundation for the NIIRS is that trained analysts have consistent and repeatable perceptions about the interpretability of imagery. If more challenging tasks can be performed with a given image, then the image is deemed to be of higher interpretability. A set of standard image exploitation tasks, or "criteria," defines the levels of the scale. To illustrate, consider Fig. 2. Several standard NIIRS tasks for visible imagery appear at the right. Note that the tasks for levels 5, 6, and 7 can be performed, but the level 8 task cannot: the grill detailing and/or license plate on the sedan are not evident. Thus, an analyst would assign a NIIRS level of 7 to this image.

Fig. 2. Illustration of NIIRS for a still image

Recent studies have extended the NIIRS concept to motion imagery (video). In exploring avenues for the development of a NIIRS-like metric for motion imagery, a clearer understanding of the factors that affect the perceived quality of motion imagery was needed (Irvine *et al.* 2006a; Young *et al.* 2010b). Several studies explored specific aspects of this
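As a concrete illustration of the IQE concept discussed above, the Leachtenauer *et al.* (1997) reference is the source of the General Image Quality Equation (GIQE). Below is a minimal sketch of the published GIQE 4.0 form; it assumes geometric-mean values for ground sample distance (GSD, in inches), relative edge response (RER), and edge overshoot (H), plus noise gain (G) and signal-to-noise ratio (SNR). It is offered only as an example of predicting NIIRS from sensor characteristics, not as the specific model used in this chapter.

```python
import math

def giqe4_niirs(gsd_in, rer, h=1.0, g=1.0, snr=50.0):
    """Predicted NIIRS from GIQE 4.0 (after Leachtenauer et al. 1997).

    gsd_in : geometric-mean ground sample distance, inches
    rer    : geometric-mean normalized relative edge response
    h      : geometric-mean edge overshoot from MTF compensation
    g      : noise gain from MTF compensation
    snr    : signal-to-noise ratio
    """
    # The published coefficients switch at RER = 0.9.
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * (g / snr))

# Finer GSD (sharper imagery) predicts a higher NIIRS.
print(round(giqe4_niirs(gsd_in=6.0, rer=0.95), 2))   # → 6.97
print(round(giqe4_niirs(gsd_in=24.0, rer=0.95), 2))  # → 4.97
```

Note how the log-scale GSD term dominates: halving the ground sample distance twice (24 in → 6 in) raises the predicted NIIRS by two full levels.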

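The level-assignment logic described for Fig. 2 — rate an image at the highest NIIRS level whose criterion task can still be performed — can be sketched as follows. The per-level judgments here are hypothetical inputs standing in for an analyst's task assessments, not actual NIIRS criteria text.

```python
def assign_niirs(performable):
    """Return the highest NIIRS level whose criterion task can be
    performed, given {level: can_perform} judgments from an analyst."""
    return max(level for level, ok in performable.items() if ok)

# Fig. 2 scenario: the level 5, 6, and 7 tasks succeed, but the level 8
# task (grill detailing / license plate) does not -> rating of 7.
judgments = {5: True, 6: True, 7: True, 8: False}
print(assign_niirs(judgments))  # → 7
```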