Advanced Applications for Artificial Neural Networks
Modulation Format Recognition Using Artificial Neural Networks for the Next Generation Optical Networks. http://dx.doi.org/10.5772/intechopen.70954

… in the presence of different impairments, such as chromatic dispersion (CD), differential group delay (DGD) and amplified spontaneous emission (ASE) noise.

In the second method, we propose a novel MFR technique using a time-frequency analysis, namely the wavelet transform. In conjunction with an ANN pattern recognition algorithm, this method is efficient for feature extraction, as it approximates both the signal envelope and the frequency content. The continuous wavelet transform (CWT) is used to extract the classification features of 40 Gbps NRZ-OOK and of three multi-carrier modulation formats, namely 160 Gbps OFDM DP-16QAM, 400 Gbps Dual-Carrier (DC)-Polarization Division Multiplexed (PDM)-QPSK and 1 Tbps WDM-Nyquist NRZ-DP-QPSK. Through simulations, the proposed technique is able to classify these modulation schemes under different transmission impairments with high accuracy.

2. MFR based on ANN trained by LOS

2.1. Principle of the proposed method

An implementation of an automatic MFR method for the detected signals at high data rates is proposed. We consider the recognition of 10 Gbps NRZ-OOK, 40 Gbps NRZ-DQPSK, 100 Gbps NRZ-DP-QPSK, 160 Gbps DP-16QAM and 1 Tbps WDM-Nyquist NRZ-DP-QPSK. The basis of this technique is the use of ANN-based pattern recognition trained on features obtained by linear optical sampling. The method is validated in the presence of various link impairments, including CD, DGD and ASE noise. Accordingly, the ANN concept and the principle of asynchronous sampling are described in the following sections.

2.1.1. ANN architecture

The ANN is a computational tool trained on input-output data to generate a desired mapping from an input stimulus to the targeted output. The architecture of an ANN consists of three layers: the input, hidden and output layers, also called a three-layer multilayer perceptron (MLP3), as shown in Figure 1. The role of the input layer is to pass the input vector to the network, without any computational role. In addition, the ANN architecture has one or more hidden layers and, finally, an output layer [1]. Each layer of processing elements performs independent computations on the data it receives and passes the result to the next layer, which in turn performs its own independent computations and passes on the result. At the end, the output of the network is determined by a subgroup of one or more processing elements. Each processing element makes its computation based upon a weighted sum of its inputs.

As shown in Figure 1, the ANN is used for MFR by assigning one output node to each format type. In our case, five output nodes are required to recognize the 10 Gbps NRZ-OOK, 40 Gbps NRZ-DQPSK, 100 Gbps NRZ-DP-QPSK, 160 Gbps DP-16QAM and 1 Tbps WDM-Nyquist NRZ-DP-QPSK formats. In the training data, the target output vectors ti (i = 1, …, m) can be considered binary vectors in which an element of value "1" indicates the correct modulation format and elements of value "0" indicate the incorrect formats, where m is the number of modulation formats to be recognized (m = 5). In this way, the target vectors of these five modulation formats are represented by [1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0] and [0, 0, 0, 0, 1], respectively. The outputs of the multilayer perceptron are interpreted as posterior probabilities; hence, the final recognition goes to the node with the highest value, argmax(yi). Taking as an example an output vector with elements [0.05, 0.01, 0.03, 0.9, 0.01], the most probable identification would be the 160 Gbps DP-16QAM format.

Figure 1. MLP3-ANN structure with amplitude histograms bins vector as input and identified modulation format as output.
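The one-hot target encoding and the argmax decision rule described above can be sketched in a few lines. This is an illustrative sketch: the format list and the example output vector come from the text, while the helper function names are our own.

```python
# Illustrative sketch of the one-hot target encoding and argmax
# decision rule described in the text; helper names are hypothetical.
FORMATS = [
    "10 Gbps NRZ-OOK",
    "40 Gbps NRZ-DQPSK",
    "100 Gbps NRZ-DP-QPSK",
    "160 Gbps DP-16QAM",
    "1 Tbps WDM-Nyquist NRZ-DP-QPSK",
]

def one_hot_target(index, m=len(FORMATS)):
    """Binary target vector ti: '1' for the correct format, '0' elsewhere."""
    return [1 if k == index else 0 for k in range(m)]

def recognize(outputs):
    """Final recognition goes to the output node with the highest value."""
    best = max(range(len(outputs)), key=lambda k: outputs[k])  # argmax(yi)
    return FORMATS[best]

# Target vector for the fourth format (160 Gbps DP-16QAM):
print(one_hot_target(3))                          # [0, 0, 0, 1, 0]
# Example output vector from the text:
print(recognize([0.05, 0.01, 0.03, 0.9, 0.01]))   # 160 Gbps DP-16QAM
```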

Amplitude histograms are presented at the input of the ANN, which is trained with the back-propagation (BP) learning method. The basic processing elements of the ANN, called neurons, are interconnected between neighboring layers by coefficients that represent the strengths, or weights, of the respective connections. Each neuron computes the weighted sum of its input signals Xi (i = 1, …, m), transformed by the transfer function, as shown in Figure 2. The learning capability of an artificial neuron is achieved by adjusting the weights Wki (i = 1, …, m) in accordance with the chosen learning algorithm. The weights of the perceptron can amplify or attenuate the original input signals. Summing the weighted signals before passing them into the activation function converts the input into a more useful output Yk. Different types of activation function exist; one of the simplest is the step function.
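As a rough illustration of how an amplitude-histogram feature vector might be formed from sampled amplitudes, the sketch below bins the samples and normalizes the counts. The bin count, normalization and function name are our own assumptions, not the chapter's actual settings.

```python
# Hypothetical sketch: turn sampled signal amplitudes into a normalized
# amplitude-histogram vector usable as the ANN input Xi.
def amplitude_histogram(samples, n_bins=50):
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat signal
    bins = [0] * n_bins
    for s in samples:
        k = min(int((s - lo) / width), n_bins - 1)
        bins[k] += 1
    total = len(samples)
    return [b / total for b in bins]           # bins sum to 1

hist = amplitude_histogram([0.1, 0.9, 0.12, 0.88, 0.11, 0.91], n_bins=4)
print(hist)  # [0.5, 0.0, 0.0, 0.5]: two amplitude levels (OOK-like)
```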

Figure 2. McCulloch-Pitts computational model of a neuron.
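The weighted-sum-plus-activation computation of the McCulloch-Pitts model can be written out directly. This is a minimal sketch: the weights, inputs and threshold values are invented for illustration.

```python
# McCulloch-Pitts-style neuron: weighted sum of inputs passed through a
# step activation, as in Figure 2. Weights and thresholds here are made up.
def step(v, threshold=0.0):
    """Simplest activation: fire (1) if the weighted sum reaches the threshold."""
    return 1 if v >= threshold else 0

def neuron(x, w, threshold=0.5):
    """Yk = f(sum_i Wki * Xi) with a step transfer function f."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return step(weighted_sum, threshold)

print(neuron([1, 0, 1], [0.4, 0.9, 0.2]))                  # 0.6 >= 0.5 -> 1
print(neuron([0, 1, 0], [0.4, 0.9, 0.2], threshold=1.0))   # 0.9 < 1.0 -> 0
```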

Designing an ANN architecture for recognition problems requires some guidelines. More neurons demand more computation, and they have a tendency to overfit the data when their number is set too high, which justifies the choice of the compact MLP3 architecture.
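A minimal MLP3 forward pass, with one hidden layer between the histogram input and the five output nodes, might look as follows. The layer sizes, random weights and the sigmoid/softmax choices are illustrative assumptions, not the chapter's trained network.

```python
import math
import random

# Illustrative MLP3 (input -> one hidden layer -> output) forward pass.
def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(x, weights, activation):
    """Each node: activation of the weighted sum of the previous layer."""
    return [activation(sum(w * xi for w, xi in zip(row, x))) for row in weights]

def softmax(y):
    """Normalize the outputs so they can be read as posterior probabilities."""
    e = [math.exp(v) for v in y]
    s = sum(e)
    return [v / s for v in e]

random.seed(0)
n_in, n_hidden, n_out = 50, 10, 5          # histogram bins -> hidden -> formats
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

x = [1.0 / n_in] * n_in                    # a dummy (uniform) histogram input
hidden = layer(x, w_hidden, sigmoid)
posteriors = softmax(layer(hidden, w_out, lambda v: v))
print(len(posteriors), sum(posteriors))    # 5 nodes, probabilities sum to ~1
```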
