**5.7 Systems security engineering**

Systems security engineering is defined in MIL-HDBK-1785 as "an element of system engineering that applies scientific and engineering principles to identify security vulnerabilities and minimize or contain risks associated with these vulnerabilities. It uses mathematical, physical, and related scientific disciplines, and the principles and methods of engineering design and analysis to specify, predict, and evaluate the vulnerability of the system to security threats."

Building more secure systems (i.e., security assurance in information technology systems) calls for the following requirements:

- Well-defined system-level security requirements,
- Well-designed component products,
- Sound systems security engineering practices,
- Competent systems security engineers,
- Appropriate metrics for product/system testing, evaluation, and assessment, and
- Comprehensive system security planning and lifecycle management.

Improving the security of information systems often relies on red teaming activities (Clem et al., 2007) because they enhance knowledge in the following ways:

- Understanding adversaries and operational environments (assessing threats);
- Anticipating program risk, identifying security assumptions, and supporting security decisions;
- Exploring and developing security options, policy, process, procedures, and impacts;
- Identifying and describing consequential program security;
- Identifying and describing consequential security design alternatives;
- Measuring security progress and establishing security baselines;
- Exploring security of future concepts of operation; and
- Identifying and describing surprise, unintended consequences.

Example red team activities (Clem et al., 2007) and their objectives include the following:

- *Design assurance red teaming* – Helps ensure that a system will achieve its mission in degraded modes of operation.
- *Operational red teaming* – Helps to train staff, conduct testing and evaluation, validate concepts of operations, and identify vulnerabilities.
- *Penetration testing* – Helps to determine what access or control an insider, an outsider, or an outsider working with an insider may obtain.

The systems security engineering community needs to move toward designing for dependability, in addition to ensuring the other system "ilities" (e.g., affordability, availability, extensibility, flexibility, and scalability). Users of systems want the ability to trust that the systems (and services) will be consistently good in quality and performance. Trustworthy and reliable is the definition of dependable; thus, securely engineered systems need to move in the direction of being dependable.

The authors posit that dependability is one of the missing specifications that enable the systems security engineering community to be more effective through the lifecycle of a system's development and to maintain effectiveness during operations and maintenance of a system. Without a specification of necessary and sufficient trustworthiness in the context of a system's use, it is extremely difficult to provide the arguments and demonstrations of the worth of security measures. They are often then subordinated in the engineering trade-space to other systems engineering factors (the other "ilities").

Dependability may serve as the property that combines security engineering with other engineering disciplines that lead to security being "built-in" versus "strapped-on." Security cannot be an afterthought. It must be built into cost, schedule, and performance. Not all risks are equal and not all users/consumers have equal tolerance for risks. Dependability establishes the value of security measures in a way that they legitimately become part of the engineering trade-space. It places a value on security measures that enable a value of worth during operations and use and serves as a metric for how well risks have been managed. Dependability may be the basis for integrating traditional "engineering of" systems (the partitioning with a defined system boundary and application of engineering disciplines) with the larger context of "engineering for" systems (the inclusion of the system environment that leads to the specification and definition of a system boundary).
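The trade-space argument above can be sketched as a simple weighted trade study. This is an illustrative sketch only; the "ility" weights, the alternative names, and the scores are hypothetical placeholders, not values from the chapter:

```python
# Illustrative trade study over system "ilities". All weights and scores
# are hypothetical placeholders chosen for the example.

ILITY_WEIGHTS = {
    "affordability": 0.25,
    "availability": 0.15,
    "extensibility": 0.10,
    "flexibility": 0.10,
    "scalability": 0.10,
    "dependability": 0.30,  # security valued as a first-class "ility"
}

def trade_score(scores):
    """Weighted sum of per-'ility' scores, each on a 0..1 scale."""
    return sum(ILITY_WEIGHTS[k] * scores.get(k, 0.0) for k in ILITY_WEIGHTS)

# "Strapped-on" security: cheap up front, weak on dependability.
strapped_on = {"affordability": 0.9, "availability": 0.8, "extensibility": 0.7,
               "flexibility": 0.7, "scalability": 0.8, "dependability": 0.3}
# "Built-in" security: costs more, but recovers value through dependability.
built_in = {"affordability": 0.7, "availability": 0.8, "extensibility": 0.7,
            "flexibility": 0.7, "scalability": 0.8, "dependability": 0.9}

print(trade_score(strapped_on) < trade_score(built_in))  # built-in wins the trade
```

The point of the sketch is the design choice it encodes: only once dependability carries explicit weight in the trade-space can a "built-in" security design win a trade study it would otherwise lose on affordability alone.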

#### **5.8 Engineering systems**

"Engineering of" systems requires a holistic perspective that treats the operating environment of the engineering of a system concept, design, development, implementation, and support as more than an assumed and invariant actor that must merely be characterized and exploited by the system to be engineered. The operating environment "as a system" can be conceived, designed, developed, implemented, and supported to attain an advantage or benefit or present a risk. This is what distinguishes the "engineering of" versus the "engineering for" a system.

Systems engineering is defined as an interdisciplinary approach to enable the realization of successful systems. It focuses on defining user needs and required functionality early in the development cycle, documenting requirements, and then continuing with the design synthesis and system validation while bearing in mind the whole problem: operations, cost and schedule, performance, training and support, test, manufacturing, and disposal. Systems engineering includes both the business and technical needs of all users with the objective of creating a quality product that meets user needs. Systems engineering can fit within the overall engineering systems field. For example, systems engineering views the enterprise as a consideration or major influence on the system whereas engineering systems includes the enterprise as an essential part of the system.

Engineering systems is an emerging interdisciplinary academic and research field focused on addressing large-scale, complex engineering challenges within their socio-political context. It takes an integrative holistic view of large-scale, complex, technologically-enabled systems with significant enterprise level interactions and socio-technical interfaces (Rhoades & Hastings, 2004). It may include components from several engineering disciplines, as well as economics, public policy, and other sciences. It is suggested that the four underlying disciplines for engineering systems are:

- Systems architecture / systems engineering and product development;
- Operations research and systems analysis;
- Engineering management; and
- Technology and policy.

Challenges in Building Trusted Information Systems 107


A high level of confidence is needed in trusted information systems. There is a current void in active risk management research. Active risk management for building trusted information systems requires the following activities:

- Understanding mission tolerance to failures of integrity, confidentiality, and availability of information throughout the life-cycle of a product and the processes producing and maintaining it;
- Understanding system criticality and priority tolerance to risks to focus resources on appropriate and adequate countermeasures and mitigations;
- Understanding dependence on critical subcomponents and designing and instrumenting for robustness of risk management during mission operations and sustainment;
- Understanding supply chain for critical components and procuring within mission risk tolerance;
- Partnering with industry to drive security (manufacturing, engineering, test and evaluation, etc.) into the processes and sub-suppliers at every production and support tier; and
- Understanding that privacy is a trust threshold only enabled in part by mechanisms of security, the association of that trust threshold with a security capability, and the ability and means of implementing countermeasures and mitigations, and vastly improved instrumentation to know the state of risk, the state of counters for such risks, and the invocation of countermeasures and mitigations on-demand.

Fig. 6. Risk Managed Systems Framework
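As a rough illustration of the instrumentation these activities call for (knowing the state of risk, the state of its counters, and invoking mitigations on demand), a minimal risk register might look like the following sketch; every class, name, and threshold here is hypothetical:

```python
# Minimal sketch of risk instrumentation: track the state of each risk,
# the state of its countermeasures, and invoke mitigations on demand.
# All names and thresholds are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    criticality: int            # 1 (low) .. 5 (mission-critical)
    tolerance: int              # highest criticality the mission can absorb
    mitigations: list = field(default_factory=list)
    mitigated: bool = False

    def within_tolerance(self):
        return self.mitigated or self.criticality <= self.tolerance

class RiskRegister:
    def __init__(self):
        self.risks = {}

    def add(self, risk):
        self.risks[risk.name] = risk

    def state_of_risk(self):
        """Risks currently outside mission tolerance (focus resources here)."""
        return [r.name for r in self.risks.values() if not r.within_tolerance()]

    def invoke_mitigation(self, name):
        """On-demand invocation of a risk's countermeasures."""
        risk = self.risks[name]
        if risk.mitigations:
            risk.mitigated = True

register = RiskRegister()
register.add(Risk("untrusted sub-supplier component", criticality=5,
                  tolerance=3, mitigations=["second-source procurement"]))
register.add(Risk("degraded telemetry", criticality=2, tolerance=3))
print(register.state_of_risk())   # the supply-chain risk is out of tolerance
register.invoke_mitigation("untrusted sub-supplier component")
print(register.state_of_risk())   # empty once the mitigation is invoked
```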

**6. Conclusion** 



Engineering trusted information systems requires active risk management. The definitions presented in Section 4 often lead to assessment and analysis results that fall short of what is required to fully manage risks; in particular, risks that originate in the operating environment (OE) of the system under analysis. Recent research has addressed a portion of this shortfall under the rubric of "systems-of-systems" considerations. However, much of this research simply alters the system boundary and applies systems knowledge and technique to a collection of interacting systems. Generally, this approach is distinguished by a deliberate and explicit focus on the interconnects among systems and how to influence or modulate each individual system to gain a better understanding of how the collective whole behaves as a "single system." Although valuable and important to the understanding of complex systems, emergent properties and behaviors of an interacting whole, and the relevance and significance of the linkages among systems as a system of its own, this approach does little to advance the study of managing risks.

Uncertainty is a fundamental source of risk. Managing uncertainty is the difficulty that hinders the successful management of risks. Uncertainty arises in the environment and propagates into a system or system-of-systems and back into the environment. These interconnections are shown in Figure 6. The carrier of uncertainty is information in information systems. This suggests that the information environment acts as an autonomous system, and is a third party actor for consideration in the management of risk. This third party originates uncertainties and has causal impact on both the structural interconnection among systems and the flow variables (information elements) that interact among the interconnected systems, and within the individual systems. The operating environment becomes a critical system when risk management is the objective. It cannot be assumed away, avoided, or ignored. This model of interacting systems of individual vulnerabilities, threats, intents, capabilities, and risks must be adequately characterized, modelled, analyzed, and evaluated as a whole. A risk event may originate anywhere, be propagated anywhere, be realized and have impact distant from its provenance, and be countered or mitigated anywhere. This is the dynamic that defines supply chain networks and the information risks they present.
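The propagation dynamic described above can be made concrete with a toy graph model in which the operating environment is itself a node. The topology and the node names below are illustrative only, loosely echoing Figure 6's labels (system S0, peer systems S'i, operating environment OE):

```python
# Toy model of the Figure 6 dynamic: the operating environment (OE) is a
# node in its own right, so a risk event originating there can propagate
# and be realized far from its provenance. Topology is illustrative.

from collections import deque

# Directed interconnections among the OE and systems S0, S'1, S'2.
links = {
    "OE":  ["S0", "S'1"],
    "S0":  ["S'2", "OE"],
    "S'1": ["S'2"],
    "S'2": [],
}

def reachable_impact(origin):
    """Breadth-first propagation of a risk event from its provenance."""
    seen, queue = {origin}, deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {origin}

# A risk event arising in the environment can impact every system...
print(sorted(reachable_impact("OE")))
# ...while one arising inside S'1 is realized only downstream, at S'2.
print(sorted(reachable_impact("S'1")))
```

The design point is that once the OE is modelled as an ordinary node rather than assumed away, ordinary reachability analysis already shows impact being realized distant from a risk event's origin.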

Modern dependency on ICT and the information operating environments creates challenges for today's system engineer in building trusted information systems. System complexity, coupled with the global sourcing of components and services, presents uncertainty in both the supplied items and the ways and means of producing the supplied items. The opportunities for "insider behavior" implemented not by humans but by the machinery of the human-machine system should alter the focus of R&D, the types and nature of countermeasures and mitigations implemented, and most certainly the tools and techniques of design, engineering, and test and evaluation, and operational monitoring.

