**3. Modeling techniques**

#### **3.1 Analytical model**

The analytical model offers fast computation; the Green's function is applied to model the diffusion equation or the RTE. The Green's function provides the solution when the source is a spatial and temporal impulse. It is commonly used to solve the forward problem for image reconstruction, specifically in fast imaging techniques. Optical properties are modeled by a Green's function [3] for a slab representing the homogeneous background, with an additional perturbation term representing the spherical insertion.
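As a concrete illustration, the continuous-wave Green's function of the diffusion equation can be evaluated directly for an infinite homogeneous medium. This is a minimal sketch under that simplifying geometry (the slab case adds image sources and the perturbation term); function and parameter names are illustrative:

```python
import numpy as np

def diffusion_greens_cw(r, mu_a, mu_s_prime):
    """CW Green's function of the diffusion equation for an infinite
    homogeneous medium: phi(r) = exp(-mu_eff * r) / (4 * pi * D * r).
    mu_a and mu_s_prime are in 1/mm, r in mm."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient [mm]
    mu_eff = np.sqrt(mu_a / D)              # effective attenuation [1/mm]
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# fluence falls off monotonically with source-detector distance
phi = diffusion_greens_cw(np.array([5.0, 10.0, 20.0]), mu_a=0.01, mu_s_prime=1.0)
```

Because the expression is closed-form, evaluating it at many source–detector distances is essentially free, which is why analytical models suit fast imaging.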

#### **3.2 Statistical model**

The statistical model treats individual photons, with Poisson error incorporated in the model. The Monte Carlo method is the gold-standard statistical technique in diffuse optics. The geometry of the model is defined by μa, μs, the refractive index and the photon trajectories. Light propagation in non-diffusive domains is calculated by Monte Carlo techniques. Random walk theory provides a distinct approach in which photon transport is modeled as a series of steps on a discrete cubic lattice. Random walk theory [9] is particularly suited to modeling time-domain measurements. An extension of the random walk technique has been developed for modeling media with anisotropic optical properties while maintaining the cubic lattice.
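A toy Monte Carlo sketch of the photon random walk, under strong simplifying assumptions (isotropic scattering, a fixed number of scattering events, absorption handled by weight reduction rather than termination; all names are illustrative):

```python
import numpy as np

def mc_photon_paths(n_photons, mu_a, mu_s, n_events=50, seed=0):
    """Each photon takes exponentially distributed steps with mean free
    path 1/mu_t; at every scattering event its weight is multiplied by
    the albedo mu_s/mu_t, which accounts for absorption."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s                       # total interaction coefficient
    albedo = mu_s / mu_t
    steps = rng.exponential(1.0 / mu_t, (n_events, n_photons))
    total_path = steps.sum(axis=0)           # path length per photon
    weight = np.full(n_photons, albedo ** n_events)
    return total_path, weight
```

A full Monte Carlo code would also sample scattering directions (e.g. from a Henyey–Greenstein phase function) and track boundary crossings; the sketch only conveys the step-and-weight bookkeeping.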

#### **3.3 Numerical model**

Numerical techniques have the potential to model complex geometries. The Finite Element Method (FEM) [8] is used to represent an inhomogeneous distribution [12] of optical properties in an arbitrary geometry. The Boundary Element Method (BEM) [3], Finite Difference Method (FDM) and Finite Volume Method (FVM) are applied in more specialized applications. FEM divides the reconstruction domain into a finite element mesh. The computational efficiency of FEM depends on using the smallest number of elements that can represent the internal field, so the mesh is refined adaptively, placing more elements where the field changes rapidly.
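The adaptive refinement idea can be sketched in one dimension: split any element across which the field jump exceeds a tolerance. This is a hypothetical helper for illustration, not a routine from an actual FEM package:

```python
import numpy as np

def refine_1d(nodes, field, tol):
    """One pass of adaptive refinement: insert a midpoint node wherever
    the field jump across an element exceeds `tol` (i.e. where the field
    changes rapidly)."""
    new_nodes = [nodes[0]]
    for a, b, fa, fb in zip(nodes[:-1], nodes[1:], field[:-1], field[1:]):
        if abs(fb - fa) > tol:
            new_nodes.append(0.5 * (a + b))  # split the element
        new_nodes.append(b)
    return np.array(new_nodes)
```

Real FEM refinement works on 2D/3D elements with error estimators, but the principle is the same: element density follows the local variation of the field.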

#### **4. Regularization**

The ill-conditioned inverse problem [5] in image reconstruction gives poor localization in localized or sparse regions. To overcome the ill-posedness, regularization is applied in the inverse model. Its various forms are standard/Tikhonov regularization, exponential/spatial regularization, generalized least mean square regularization, adaptive regularization and model-based regularization [8, 11, 13–15].

#### **4.1 Standard regularization**

Standard regularization [16], or constant regularization, is of Tikhonov type. Here the regularization is based on already available information, which may be the noise characteristics [11] or structural information [17] of the data; using more prior information [13] leads to a better reconstruction outcome and robustness to noise in the data.

$$P(\mu\_a) = \lambda\_T \|\mu\_a\|^2 \tag{6}$$

λT is a regularization parameter, a constant chosen to stabilize the solution; its value varies from 1e-6 to 10.

$$
\lambda\_T = \left(\sigma\_0\right)^2 / \left(\sigma\_{\mu-\mu\_0}\right)^2 \tag{7}
$$

The ill-posed inverse problem is solved by adding a penalty term to the objective function.

$$\Omega = \left\| y - G(\mu\_a) \right\|^2 + P(\mu\_a) \tag{8}$$

y = ln(A) is the measured experimental data, where A is the amplitude; G(μa) is the modeled data; and the penalty term P(μa) removes the high-frequency components. Iterative linearization minimizes Ω by setting ∂Ω/∂μa = 0. Using a Taylor series expansion, the Tikhonov minimization is obtained as

$$\Omega = \left\| y - G(\mu\_a) \right\|^2 + \lambda\_T \left\| L(\mu\_a - \mu\_0) \right\|^2 \tag{9}$$

L is a dimensionless regularization matrix and μ0 is the prior estimate. The penalty-term minimization scheme, together with linearization, leads to the update equation (the Gauss–Newton update equation)

$$\left[\mathbf{J}^T \mathbf{J} + \lambda \mathbf{I}\right] \Delta \mu\_a = \mathbf{J}^T(\mathbf{y} - \mathbf{G}(\mu\_a)) \tag{10}$$

'J' is the Jacobian ∂G(μa)/∂μa, which gives the rate of change of the modeled data with respect to μa, and I is the identity matrix. The diffuse optical tomography inverse problem is posed as a least-squares problem, solved by iteratively matching the experimentally measured boundary data with the modeled data.

*Diffuse Optical Tomography System in Soft Tissue Tumor Detection DOI: http://dx.doi.org/10.5772/intechopen.98708*
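Eq. (10) amounts to one linear solve per iteration. A minimal NumPy sketch (dense algebra, illustrative names) is:

```python
import numpy as np

def gauss_newton_update(J, y, G_mu, lam):
    """Solve [J^T J + lam I] d_mu = J^T (y - G(mu)) for the Tikhonov-
    regularized Gauss-Newton update d_mu, as in Eq. (10)."""
    JtJ = J.T @ J
    rhs = J.T @ (y - G_mu)
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
```

In the full reconstruction loop, μa is incremented by this update, the forward model G(μa) and Jacobian J are recomputed, and the process repeats until the data misfit stops decreasing.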

Linearization of (9) leads to the update equation.

$$\left[\mathbf{J}^{T}\mathbf{J} + \lambda\_{T}\mathbf{L}^{T}\mathbf{L}\right]\Delta\mu\_{i} = \mathbf{J}^{T}\delta\_{i-1} - \lambda\_{T}\mathbf{L}^{T}\mathbf{L}(\mu\_{i-1} - \mu\_{0}) \tag{11}$$

δi−1 represents the data–model misfit and T denotes the transpose operation. The resolution provided by (9) is concentrated near the detected position.

#### **4.2 Adaptive regularization**

In adaptive regularization [15] the regularization parameter λ varies with the projection error [18]. The projection error Φ is defined as the difference between the measured and modeled data, expressed as in (12).

$$\Phi = \left\| y - G(\mu\_a) \right\|^2 \tag{12}$$

The regularization parameter λ is denoted as

$$
\lambda = 1/\left(2 + e^{-\Delta\Phi}\right) \tag{13}
$$

The regularization parameter λ thus varies in the range from 1/3 to 1/2. In (13), 'e' is the exponential function and ΔΦ is the change in projection error between iterations. The penalty term for projection-error-based regularization is expressed as

$$P(\mu\_a) = \lambda(\Delta \Phi) \left\| \mu\_a \right\|^2 \tag{14}$$

Linearization leads to an updated equation.

$$
\Delta \mu\_a = \mathbf{J}^T \left[ \mathbf{J}\mathbf{J}^T + \lambda\, \mathrm{diag}\left(\mathbf{J}\mathbf{J}^T\right) \right]^{-1} \delta \tag{15}
$$

As in Eq. (15), Δμa represents the change in the absorption coefficient. The projection error determines the accuracy, while JJT serves as the Hessian matrix, whose diagonal elements weight the regularization.
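Eq. (13) can be evaluated once per iteration from the change in projection error. A minimal sketch, assuming ΔΦ is taken as nonnegative so that λ stays within [1/3, 1/2); names are illustrative:

```python
import numpy as np

def adaptive_lambda(phi_prev, phi_curr):
    """Eq. (13): lambda = 1 / (2 + exp(-delta_phi)), where delta_phi is
    the change in projection error ||y - G(mu)||^2 between iterations.
    For delta_phi >= 0 the value lies in [1/3, 1/2)."""
    delta_phi = phi_curr - phi_prev
    return 1.0 / (2.0 + np.exp(-delta_phi))
```

When the projection error barely changes, λ sits near 1/3 (light damping); a large change pushes λ toward 1/2 (heavier damping), which is the adaptive behavior the text describes.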

#### **4.3 Exponential regularization**

Exponential regularization is also called spatially varying regularization [14] or wavelength/chromophore-specific regularization; it is based on the physics of the problem. This simplicity makes it widely used for solving inverse problems, especially when prior information is not available. λ(r) is the spatially varying regularization parameter, where r is the spatial position. The spatial variation is attained by an exponential function of the form

$$
\lambda(r) = \lambda\_c \exp\left(r/R\right) \tag{16}
$$

In (16), R is the radius of the imaging domain, and λc and λe = λ(R) are the regularization parameters at the center and edge of the domain. The spatially varying regularization is therefore low at the center and large near the boundary of the imaging domain, in order to neutralize the hypersensitivity near the boundary that arises from the detectors located there. To apply the regularization parameter λ(r), the generalized objective function is given as

$$\Omega = \left\| y - G(\mu) \right\|^2 + \lambda(r) \left\| L(\mu - \mu\_0) \right\|^2 \tag{17}$$

In (17), L is a dimensionless regularization matrix and μ0 is the prior estimate of the properties, while the penalty term for exponential regularization is represented as

$$P(\mu\_a) = \lambda(r) \|\mu\_a\|^2 \tag{18}$$

Linearizing (17) leads to the update equation

$$\left[\mathbf{J}^{T}\mathbf{J} + \lambda(r)\mathbf{L}^{T}\mathbf{L}\right]\Delta\mu\_{i} = \mathbf{J}^{T}\delta\_{i-1} - \lambda(r)\mathbf{L}^{T}\mathbf{L}(\mu\_{i-1} - \mu\_{0})\tag{19}$$

Exponential regularization approximates the Hessian matrix by JTJ.
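A sketch of the spatially varying parameter of Eq. (16), under the assumption that the intended form is λ(r) = λc·exp(r/R), so λc applies at the center and λe = λc·e at the edge; names are illustrative:

```python
import numpy as np

def spatial_lambda(r, R, lam_c):
    """Exponentially varying regularization: smallest at the center
    (r = 0) and largest at the boundary (r = R), per
    lambda(r) = lam_c * exp(r / R)."""
    return lam_c * np.exp(np.asarray(r) / R)
```

Evaluated at every mesh node's radial position, this yields a diagonal regularization profile that damps the boundary hypersensitivity while leaving the interior comparatively unconstrained.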

#### **4.4 Model based regularization**

Model-based regularization utilizes a combination of the model resolution matrix [19] and the data resolution matrix. The objective is to match the modeled data with the observed data; this form of regularization improves the spatial resolution of the reconstructed image [18].

$$
y = G(\mu\_a) \tag{20}
$$

Expanding using Taylor series gives the equation

$$y = G(\mu\_a) = G(\mu\_{a0}) + G'(\mu\_{a0})(\mu\_a - \mu\_{a0}) + \frac{1}{2}(\mu\_a - \mu\_{a0})^T G''(\mu\_{a0})(\mu\_a - \mu\_{a0}) + \dots \tag{21}$$

The Jacobian matrix is J = G′(μa0) and the Hessian matrix is H = G″(μa0). Linearizing (21) gives

$$y = G(\mu\_{a0}) + \mathbf{J}(\mu\_a - \mu\_{a0}) \tag{22}$$

Using y − G(μa0) = δ and Δμ̃a = μa − μa0 [20], the linearized forward model gives the update equation

$$\delta = \mathbf{J} \Delta \tilde{\mu}\_a \tag{23}$$

The estimated change in the absorption coefficient Δμa is then derived from (23) via the regularized inversion as

$$
\Delta \mu\_a = \left[ \mathbf{J}^T \mathbf{J} + \lambda \mathbf{I} \right]^{-1} \mathbf{J}^T \mathbf{J} \Delta \tilde{\mu}\_a \tag{24}
$$

In the case of λ = 0

$$
\Delta \mu\_a = \Delta \tilde{\mu}\_a \tag{25}
$$

The regularization term is characterized by the model resolution matrix, which depends on the forward model and the regularization but not on the data. Because of the ill-posed nature of the problem λ > 0, so instead of (25)

$$
\Delta\mu\_a \neq \Delta\tilde{\mu}\_a \tag{26}
$$

Eq. (24) thus leads to the model resolution matrix.


$$\mathbf{M} = \left[\mathbf{J}^T \mathbf{J} + \lambda \mathbf{I}\right]^{-1} \mathbf{J}^T \mathbf{J} \tag{27}$$

In (27), M has dimension NN × NN and depends purely on JTJ and the regularization term used; λ varies from 0 to 1. Linearization with (27) leads to the update equation.

$$\left[\mathbf{J}^{\mathrm{T}}\mathbf{J} + c\lambda\_{\mathrm{i}}\mathbf{I}\right]\Delta\mu\_{a} = \mathbf{J}^{\mathrm{T}}\left(\mathbf{y} - \mathbf{G}(\mu\_{a})\right) \tag{28}$$

The regularization parameter of a model resolution matrix is given as

$$
\lambda\_{im} = M\_{ii} / \max\left(M\_{ii}\right) \text{ for } i = 1, 2, \dots, NN \tag{29}
$$

The model resolution matrix can be applied to derive the linearization in (28) for both constant and spatially varying regularization parameters; the matrix differs between the two cases. Its main aim is to provide better resolution characteristics without depending on the data.
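Eqs. (27) and (29) translate directly into dense linear algebra. A minimal sketch with illustrative names (a dense solve; practical codes exploit structure for large NN):

```python
import numpy as np

def model_resolution(J, lam):
    """Eq. (27): M = [J^T J + lam I]^(-1) J^T J, of size NN x NN."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), JtJ)

def lambda_im(M):
    """Eq. (29): per-node regularization from the diagonal of M,
    normalized by its maximum."""
    d = np.diag(M)
    return d / d.max()
```

With λ → 0 and a well-conditioned J, M approaches the identity, i.e. perfect model resolution; increasing λ shrinks the diagonal and the normalized λim flags the nodes where resolution is lost.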

The data resolution matrix concentrates only on the data, not on the image characteristics [21]. It quantifies how well the estimated Δμa fits the observed data; hence it is important to consider the data too in order to improve the resolution characteristics.

$$\mathbf{J}\Delta\mu\_{a} = \delta' \tag{30}$$

Data resolution matrix is derived using the Jacobian matrix (J) and the regularization technique which is used for reconstruction. It is evaluated by matching the predicted data with the obtained data [22].

$$
\delta' = \mathcal{y} - G(\mu\_{a0}) \tag{31}
$$

The data resolution matrix does not depend on the specific data (y) or the error in it, but is exclusively a property of J and the regularization (λ) used. The closer it is to the identity matrix, the smaller the prediction errors for δ′, where δ′ in (31) represents the data misfit.

Data resolution matrix D is given as

$$D = \mathbf{J}\mathbf{J}^T \left[ \mathbf{J}\mathbf{J}^T + \lambda \mathbf{I} \right]^{-1} \tag{32}$$

Solving (30) in regularized form leads to the update equation

$$
\Delta \mu\_i = \mathbf{J}^T \left[ \mathbf{J} \mathbf{J}^T + \lambda \mathbf{I} \right]^{-1} \delta\_{i-1} \tag{33}
$$

The regularization parameter of a data resolution matrix is given as

$$\lambda\_{\rm id} = D\_{\rm ii} / \max\left(D\_{\rm ii}\right) \text{ for } i = 1, 2, \dots, NM \tag{34}$$

From (29) and (34), the regularization parameter of the model-based regularization λi is given as

$$
\lambda\_i = (\lambda\_{im} + \lambda\_{id}) / 2 \tag{35}
$$

Penalty term for the regularization scheme is given as

$$P(\mu\_a) = c\lambda\_i \left\| \mu\_a \right\|^2 \tag{36}$$

where c is a constant that weights the penalty term.
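Similarly, Eqs. (32) and (34) give the data resolution matrix and its per-measurement parameter. A minimal sketch (dense inverse for clarity; Eq. (35) then averages λim and λid elementwise where their dimensions agree):

```python
import numpy as np

def data_resolution(J, lam):
    """Eq. (32): D = J J^T [J J^T + lam I]^(-1), of size NM x NM."""
    JJt = J @ J.T
    return JJt @ np.linalg.inv(JJt + lam * np.eye(JJt.shape[0]))

def lambda_id(D):
    """Eq. (34): per-measurement regularization from the diagonal of D,
    normalized by its maximum."""
    d = np.diag(D)
    return d / d.max()
```

At λ = 0 (and J of full row rank) D is exactly the identity, so the predicted data reproduce the observed data; as λ grows, D shrinks away from the identity and the prediction error increases, which is the behavior the text describes.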

## **5. Inverse model**

The Newton–Raphson iterative method finds the optical parameters μa, μs by minimizing the objective function.

$$\psi(\mu) = \left\| \Phi^m - \Phi^c \right\|^2 + \lambda \left\| \mu - \mu\_0 \right\|^2 \tag{37}$$

Φm and Φc are the measured and calculated radiance at the detectors, λ is the regularization parameter, and μ, μ0 are the current and initial estimates of the optical properties at each node.

The initial values of the absorption and scattering properties are estimated as homogeneous [23]. The update of the optical property distribution under Tikhonov regularization is given by Eq. (38).

$$\Delta\mu = \mathbf{J}^{T} \left( \mathbf{J} \mathbf{J}^{T} + \lambda H\_{\max} \mathbf{I} \right)^{-1} \left(\Phi^{m} - \Phi^{c}\right) \tag{38}$$

Δμ is the optical parameter update vector and Hmax is the maximum main-diagonal element of the matrix JJT. The Jacobian matrix J of the inverse problem maps the variation in log amplitude and phase to the absorption and diffusion changes at every node.
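Eq. (38) is a Levenberg–Marquardt-style update carried out in measurement space. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def lm_update(J, phi_m, phi_c, lam):
    """Eq. (38): d_mu = J^T (J J^T + lam * H_max * I)^(-1) (phi_m - phi_c),
    where H_max is the largest diagonal entry of J J^T."""
    JJt = J @ J.T
    H_max = np.diag(JJt).max()
    return J.T @ np.linalg.solve(JJt + lam * H_max * np.eye(JJt.shape[0]),
                                 phi_m - phi_c)
```

Scaling λ by Hmax makes the damping relative to the magnitude of JJT, so the same nominal λ behaves consistently across problems of different scale.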

#### **5.1 Jacobian reduction**

The Jacobian matrix J has size NM × NN (number of measurements by number of FEM nodes) and is calculated using the adjoint method. The Jacobian [23, 24] is limited to the measured amplitude data and optical absorption: it links a change in log amplitude at the boundary to a change in the absorption coefficient μa.

$$J = \begin{bmatrix} \frac{\partial \ln I\_1}{\partial \mu\_{a1}} & \cdots & \frac{\partial \ln I\_1}{\partial \mu\_{aNN}} \\\\ \vdots & \ddots & \vdots \\\\ \frac{\partial \ln I\_{NM}}{\partial \mu\_{a1}} & \cdots & \frac{\partial \ln I\_{NM}}{\partial \mu\_{aNN}} \end{bmatrix} \tag{39}$$

The size of the Jacobian matrix is reduced by calculating the total sensitivity throughout the imaging domain, from which a new Jacobian J̃ij is formed [25].

$$\tilde{J}\_{ij} = \begin{cases} J\_{ij} & \text{if } \sum\_{i=1}^{\text{NM}} J\_{ij} \ge \text{threshold} \\\\ 0 & \text{if } \sum\_{i=1}^{\text{NM}} J\_{ij} < \text{threshold} \end{cases} \tag{40}$$

'j' corresponds to a node number within the domain. Reducing the Jacobian matrix improves the computational speed and efficiency of image reconstruction.
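The thresholding of Eq. (40) can be sketched as a column-wise operation on J. Here the total sensitivity is taken as the sum of magnitudes over measurements (an assumption; the source sums the entries directly), and all names are illustrative:

```python
import numpy as np

def reduce_jacobian(J, threshold):
    """Eq. (40): zero out the columns (nodes) whose total sensitivity,
    summed over all NM measurements, falls below `threshold`."""
    sensitivity = np.abs(J).sum(axis=0)    # column sums over measurements
    J_red = J.copy()
    J_red[:, sensitivity < threshold] = 0.0
    return J_red
```

Zeroed columns can then be dropped entirely, shrinking the linear system solved at each iteration, which is where the speed-up comes from.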

### **5.2 Bayesian framework**

Because of the ill-posed condition of the DOT problem, the solution is not robust. To overcome this, a priori information is incorporated as a constraint on the space of unknowns. The Bayesian approach provides an algorithm for a spatial physiological prior [26]. A high-resolution anatomical image is segmented into sub-images. Each sub-image is assigned a mean value with a prior probability density function. A 'confidence level' is defined in the form of an image-variance formulation to allow local variations within sub-images. The MAP (maximum a posteriori) estimate of the image [26, 27] is formed from the image's probability density function.

$$
\hat{x}\_{\text{MAP}} = \arg\max\_{x} \{ \log p(y|x) + \log p(x) \}\tag{41}
$$

p(y|x) is the likelihood function and p(x) is the prior probability density function.

An alternating minimization algorithm sequentially updates the unknown parameters and solves the optimization problem. The probability density function of the ith sub-image is defined in the spatial prior as

$$p(x\_i|\sigma\_i) = \frac{1}{\left(2\pi \sigma\_i^2\right)^{N\_i/2}} \exp\left(\frac{-1}{2\sigma\_i^2} \left|x\_i - C\_i\right|^2\right), i = 1, 2, \dots, M\tag{42}$$

M is the number of sub-regions, Ni the number of voxels in the ith sub-image, xi the unknown sub-image, Ci the mean chromophore concentration, and σi2 the single sub-image variance.

The confidence level is incorporated into the statistical reconstruction procedure through the sub-image variance.

$$p(\sigma\_i) = \frac{1}{\left(2\pi \gamma\_i^2\right)^{N\_i/2}} \exp\left(-\frac{1}{2\gamma\_i^2} \left|\sigma\_i - \sigma\_{i0}\right|^2\right), i = 1, 2, \dots, M \tag{43}$$

γi is the variance and σi0 is the mean value of σi.
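The MAP objective of Eqs. (41)–(42) can be written down explicitly for a linear forward model. A minimal sketch assuming Gaussian noise and the per-sub-image Gaussian prior (all names are illustrative; the alternating minimization over x and σ is not shown):

```python
import numpy as np

def log_posterior(x, y, A, sub_labels, means, sigmas, noise_sigma):
    """Unnormalized log-posterior of Eq. (41) for a linear forward model
    y = A x + noise, with the per-sub-image Gaussian prior of Eq. (42).
    sub_labels assigns each voxel of x to one of M sub-images."""
    log_lik = -0.5 * np.sum((y - A @ x) ** 2) / noise_sigma ** 2
    log_prior = 0.0
    for i in np.unique(sub_labels):
        xi = x[sub_labels == i]  # voxels of the i-th sub-image
        log_prior += -0.5 * np.sum((xi - means[i]) ** 2) / sigmas[i] ** 2
    return log_lik + log_prior
```

The MAP estimate is the x maximizing this quantity; a small sub-image variance σi expresses high confidence in the anatomical prior, pulling the reconstruction toward the segmented mean values.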

#### **6. Experimental set-up**

The practical setup of image acquisition as shown in **Figure 2** includes optical components, electrical components, control, data acquisition and image reconstruction [28].

The optical multiplexer has three parts: the motor, the drive and the black box containing the photomultiplier tube (PMT). The driver rotates the optical multiplexer to guide the light to the PMT, which converts it to electrical signals. The signal is amplified and the preprocessed electrical signals are fed to a data acquisition card. The data acquisition software samples the raw data, post-processes it and controls the hardware. The personal computer delivers commands to switch the source fiber channel sequentially. 16 × 16 input and output fibers constitute 256 source–detector pairs. The image is reconstructed using inverse modeling, such as Jacobian reduction with FEM.


**Figure 2.** *The practical imaging system.*

