Multi-Dimensional Codebooks for Multiple Access Schemes

*Kais Hassan, Kosai Raoof and Pascal Chargé*

### **Abstract**

The sparse code multiple access (SCMA) scheme directly maps the incoming bits of several sources (users/streams) to complex multi-dimensional codewords selected from a specific predefined sparse codebook set. The codewords of all sources are then superimposed and exchanged. The shaping gain of the multi-dimensional constellation of SCMA leads to better system performance. The decoder's task is to separate the superimposed sparse codewords. Most existing works on SCMA decoders employ the message passing algorithm (MPA), one of its variations, or a combination of MPA and other methods. The system architecture is highlighted and its basic principles are presented. Then, an overview of the main multi-dimensional constellations for SCMA systems is provided. Afterwards, we focus on how the SCMA codebooks are decoded and how their performance is evaluated and compared.

**Keywords:** multi-dimensional constellations, codebook design, message passing algorithms, sparse code, code-domain

### **1. Introduction**

Massive connectivity is one of the main requirements of 5G telecommunication systems and beyond. One key to fulfilling this objective is to allow several users to efficiently access the same resources (a frequency band, for example) simultaneously; this approach is called multiple access. Based on how the resources are shared among multiple users, two types of multiple access can be distinguished: orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) [1].

A well-known OMA scheme is code-division multiple access (CDMA). The idea is to divide the symbol duration into a number of time slots, or chips, such that the spreading sequence associated with each user is chosen from a set of non-sparse quasi-orthogonal ones. The overall transmitted sequence is the superimposition of the symbols of all users, which are spread over different chips. The number of served users is limited by the number of available quasi-orthogonal sequences; however, the orthogonality of the sequences keeps the receiver simple, since a low-complexity correlation operation is sufficient to detect the users' symbols despite the inter-sequence interference.

The key difference between code-domain NOMA techniques and CDMA is that the spreading sequences of the former are restricted to non-orthogonal, low cross-correlation sparse sequences, so that more spreading sequences can be used and, consequently, more users can be served simultaneously. One code-domain NOMA scheme which has been shown to achieve a promising link-level performance is SCMA [2]. Traditional code-domain schemes map bits to a symbol selected from a one-dimensional constellation before spreading this symbol over a given low-density spreading sequence; SCMA combines these two steps, which gives birth to the idea of multi-dimensional constellations. The capacity to directly map the bits to sparse SCMA codewords belonging to multi-dimensional codebooks has attracted a lot of attention.

In this chapter, we present the SCMA system architecture through its basic principles and its signal model. Then, existing methods for SCMA codebook design are reviewed. Finally, we explain how the SCMA signal can be detected at the receiver, either using the traditional MPA or one of its variations.

### **2. SCMA system architecture**

In the following subsections, multi-dimensional coding principles are presented before illustrating why SCMA can be employed to provide multiple access.

#### **2.1 Basic principles of multi-dimensional constellations**

SCMA spreads its sequence in the frequency domain over $K$ subcarriers; these narrow frequency bands are also called resource elements (REs). For an uplink scenario, a base station (BS) serves simultaneously $J$ separate users. User $j$, also called layer $j$, sends a $K$-dimensional codeword, $\mathbf{x}_j^{(m)}$, which represents $\log_2 M_j$ data bits. Consequently, $\mathbf{x}_j^{(m)}$ must be chosen from a codebook, $C_j$, of size $M_j$, such that the multi-dimensional constellation, $\mathcal{C} = \{C_j\}$, $1 \le j \le J$, is designed to facilitate the multiple access. Actually, the codewords of all users, $\mathbf{x}_j^{(m)}$, $1 \le j \le J$, are superimposed and exchanged over the $K$ REs; in fact, $\mathcal{C}$ collects the signatures of the served users. In order to increase the number of connected users, the codewords are designed to be sparse, i.e. all their entries are zeros except for a few ones; in other words, the number of non-zero entries, $N_j$, must be much smaller than the length of the codewords, $K$, i.e. $N_j \ll K$. Hence, the $j$th SCMA layer can be described by its *codebook sparsity degree*, $N_j$, and the whole SCMA system is characterized by its overloading factor, $\lambda = J/K$, and the number of users superimposed on each RE, $d_f$.


However, we must highlight that all the $N_j$ non-zero entries of the codewords of $C_j$ are located in the same positions.

Based on the different parameters of the SCMA system, especially the size of the codebook of each user, $M_j$, and its codebook sparsity degree, $N_j$, we can distinguish two kinds of SCMA system architectures:

* *regular* SCMA, where all users share the same codebook size and sparsity degree, i.e. $M_j = M$ and $N_j = N$, $1 \le j \le J$;
* *irregular* SCMA, where the codebook size and/or the sparsity degree may differ from one user to another.

**Figure 1** presents an example of a regular SCMA system where, obviously, $N_j = 2$, $1 \le j \le J$, and $M_j = 4$, $1 \le j \le J$. This system is characterized by $d_f = 3$ and $\lambda = 150\%$. In the rest of this chapter, a simple "SCMA" will implicitly refer to regular SCMA.

The received vector for an uplink SCMA system is given by,

$$\mathbf{y} = \sum\_{j=1}^{J} \mathbf{H}\_{j} \mathbf{x}\_{j}^{(m)} + \mathbf{n},\tag{1}$$

where $\mathbf{y} = [y_1, \cdots, y_K]^T$ and $\mathbf{x}_j^{(m)} = [x_{j,1}^{(m)}, \cdots, x_{j,K}^{(m)}]^T$. Let us denote the channel gain of user $j$ on subcarrier $k$ by $h_{j,k}$; hence the matrix $\mathbf{H}_j$ is diagonal of dimension $K \times K$ with $h_{j,k}$, $1 \le k \le K$, as its diagonal entries. Finally, at the receiver, a zero-mean white circularly complex Gaussian noise, $\mathbf{n}$, with variance $N_0$ is added, i.e. $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, N_0\mathbf{I}_K)$, where $\mathbf{I}_K$ is the identity matrix of size $K$.
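The uplink signal model of Eq. (1) can be simulated in a few lines of numpy. This is an illustrative sketch only: the codewords below are random complex symbols placed on each user's $N = 2$ REs, not codewords from a designed SCMA codebook, and the support sets are a hypothetical choice matching Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)

K, J = 4, 6          # REs and users (as in Figure 1)
N0 = 0.1             # noise variance

# Hypothetical sparse codewords: each user occupies N = 2 of the K REs.
supports = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # non-zero REs per user
x = np.zeros((J, K), dtype=complex)
for j, (a, b) in enumerate(supports):
    x[j, [a, b]] = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Per-user channel gains h_{j,k}; H_j = diag(h_{j,1}, ..., h_{j,K})
h = (rng.standard_normal((J, K)) + 1j * rng.standard_normal((J, K))) / np.sqrt(2)

# Received vector of Eq. (1): superposition of all users plus CN(0, N0 I_K) noise
n = np.sqrt(N0 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = (h * x).sum(axis=0) + n
print(y.shape)  # (4,)
```

Note that multiplying `h * x` element-wise and summing over users is equivalent to applying each diagonal matrix $\mathbf{H}_j$ to $\mathbf{x}_j^{(m)}$ and summing.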

### **3. SCMA codebook design**

The design of an SCMA codebook is usually based on several steps; each of them is described in this section. The idea is that the constellation function associated with each user $j$ generates a constellation set of $M$ alphabets of length $N$. Then, the mapping matrix $\mathbf{V}_j$ maps the $N$-dimensional constellation points to SCMA codewords to form the codebook $C_j$.

#### **Figure 1.**

*The encoder of a regular SCMA system: The transmitted codeword is the superposition of the codewords of all users, each selected from its own codebook according to the $\log_2(M)$ bits to be transmitted at each time frame.*

#### **3.1 Codebook design procedure**

The description of an SCMA system begins by determining the locations of the non-zero elements of user $j$, $1 \le j \le J$, via the vector $\mathbf{f}_j$. For instance, $\mathbf{f}_j = [1, 1, 0, 0]^T$ means that user $j$ employs the first two subcarriers only to send his data. This can also be described using another matrix, $\mathbf{V}_j$, of dimension $K \times N$, given by,

$$\mathbf{V}\_{j} = \begin{bmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{2}$$

where $\mathbf{V}_j$ is the mapping matrix of user $j$. Thus, the whole SCMA system is described by gathering the $\mathbf{f}_j$ vectors in one matrix $\mathbf{F}$ of dimension $K \times J$ such that $\mathbf{F} = [\mathbf{f}_1, \cdots, \mathbf{f}_J]$; $\mathbf{F}$ is called the factor graph matrix. The two matrices are related by $\mathbf{f}_j = \mathrm{diag}(\mathbf{V}_j\mathbf{V}_j^T)$.

The factor graph matrix that represents the system in **Figure 1** is given by,

$$\mathbf{F} = \begin{bmatrix} \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} \\ \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} & \mathbf{1} \end{bmatrix} \tag{3}$$

and the factor graph itself is depicted in **Figure 2** where every circle represents a user (so-called variable node) and every block represents a subcarrier (so-called function node).

Thus, the matrix $\mathbf{F}$ is related to the codeword $\mathbf{x}_j^{(m)}$ in Eq. (1) by the fact that the structure of $\mathbf{F}$ defines where the zeros are located in the codebook from which the codeword $\mathbf{x}_j^{(m)}$ is selected.
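The relations among $\mathbf{F}$, $\mathbf{f}_j$ and $\mathbf{V}_j$ can be checked numerically. The sketch below (helper name `mapping_matrix` is our own) builds the factor graph neighborhoods and the mapping matrix of user 1 from the matrix $\mathbf{F}$ of Eq. (3):

```python
import numpy as np

# Factor graph matrix F of Eq. (3): rows = REs (FNs), columns = users (VNs)
F = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
K, J = F.shape

# Neighborhoods, as used later by the MPA decoder
U = [np.flatnonzero(F[k]) for k in range(K)]     # VNs connected to FN_k
R = [np.flatnonzero(F[:, j]) for j in range(J)]  # FNs connected to VN_j

d_f = len(U[0])      # users superimposed per RE
overload = J / K     # overloading factor lambda
print(d_f, overload)  # 3 1.5

def mapping_matrix(j):
    # V_j of Eq. (2): K x N binary matrix selecting user j's non-zero REs
    N = len(R[j])
    V = np.zeros((K, N), dtype=int)
    V[R[j], np.arange(N)] = 1
    return V

# f_j is recovered as the diagonal of V_j V_j^T
V1 = mapping_matrix(0)
assert np.array_equal(np.diag(V1 @ V1.T), F[:, 0])
```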

#### **Figure 2.**

*The matrix,* **F***, can be translated into a factor graph. The encoder of the SCMA system illustrated in Figure 1 is represented by this factor graph.*

As to the SCMA codebook design, it is considered as a joint optimization problem whose objective is to find both the optimum user-to-RE mapping matrices $\mathcal{V}^*$ and the optimum multi-dimensional constellation $\mathcal{C}^*$; hence, this problem can be defined as,

$$\mathcal{V}^\*, \mathcal{C}^\* = \arg\max\_{\mathcal{V}, \mathcal{C}} D(\phi(\mathcal{V}, \mathcal{C}; J, M, N, K)) \tag{4}$$

where $D$ is a design criterion and $\phi$ is the SCMA system as described above. However, the SCMA system must be designed under the assumption that $J$ users are simultaneously connected, that is, the system is fully loaded. In this case, the number of users is equal to the number of possible $N$-combinations among the $K$ available REs,

i.e. $J = \binom{K}{N}$. Hence, there is only one possible optimal mapping matrix solution.
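For the regular system of Figure 1, this full-load condition checks out: $\binom{4}{2} = 6$ users over 4 REs.

```python
from math import comb

K, N = 4, 2
J = comb(K, N)   # number of users in a fully loaded SCMA system
print(J)         # 6
```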

Finding the optimum multi-dimensional constellation is still complex; one way to simplify this optimization problem is to divide it into several subproblems [4]. Hence, the multi-stage design of an SCMA codebook is conducted in three main steps:

* design a common $N \times M$ mother constellation, $\mathbf{C}_{mc}$;
* design, for each user $j$, a transformation operator $\mathbf{T}_j$ applied to the mother constellation;
* apply the mapping matrix $\mathbf{V}_j$ to obtain the sparse codebook $C_j$.

Taking into consideration the above-mentioned remarks, namely the uniqueness of the optimal solution for the mapping matrix and the multi-stage solution for the multi-dimensional constellation design, Eq. (4) can be rewritten as,

$$\left\{\mathbf{T}\_{j}^{\*}\right\}, \mathbf{C}\_{mc}^{\*} = \arg\max\_{\left\{\mathbf{T}\_{j}\right\}, \mathbf{C}\_{mc}} D\left(\boldsymbol{\phi}\left(\boldsymbol{\mathcal{V}}^{\*}, \left\{\mathbf{T}\_{j}\mathbf{C}\_{mc}\right\}; \boldsymbol{J}, \boldsymbol{M}, \boldsymbol{N}, \boldsymbol{K}\right)\right) \tag{5}$$

such that the $j$th codebook is calculated by,

$$\mathbf{C}\_{j} = \mathbf{V}\_{j}^{\*} \mathbf{T}\_{j}^{\*} \mathbf{C}\_{mc}^{\*}. \tag{6}$$

In the following parts of this section, inspired by the codebook design procedure illustrated in **Figure 3**, we present the main keys to designing the mother constellation and the appropriate transformation operators.

#### **3.2 Mother constellation design**

The codebook of user $j$ must be composed of $M$ codewords since it encodes $\log_2(M)$ bits; each codeword has $N$ non-zero elements. Hence, we start by designing a mother constellation matrix of $N$ rows and $M$ columns. Each of the $N$ rows represents one dimension of the constellation, while the $m$th column is one of the $M$ multi-dimensional points of the constellation. The objective of the design process is to guarantee a

#### **Figure 3.**

*A block diagram that illustrates the different steps which are conducted to design a SCMA codebook.*

sufficiently good distance among all the points in the set $\mathcal{C}$, that is, keeping the points of the multi-dimensional constellation sufficiently far from each other so that they can be separated and decoded at the receiver. Consequently, the mother constellation must own a good distance profile. However, this requires defining how the distance between two multi-dimensional points is measured, which fundamentally defines the criterion $D$ in Eq. (4). Hereafter, the interested reader can find a list of the most employed distance definitions in the state of the art.

#### *3.2.1 Euclidean distance*

The Euclidean distance between two constellation points, $\mathbf{x}_i^{(u)}$ and $\mathbf{x}_j^{(m)}$, $1 \le u \le M$, $1 \le m \le M$, of users $i$ and $j$ respectively, $1 \le i \le J$, $1 \le j \le J$, is calculated by,

$$\mathbf{d}\_E \left( \mathbf{x}\_j^{(m)}, \mathbf{x}\_i^{(u)} \right) = \| \mathbf{x}\_j^{(m)} - \mathbf{x}\_i^{(u)} \| \tag{7}$$

A classic design criterion is the minimum Euclidean distance of a multidimensional constellation [5, 6], it is defined as,

$$\mathbf{d}\_{E}^{(\min)} = \min\_{\substack{1 \le u, m \le M \\ 1 \le i, j \le J \\ (j,m) \ne (i,u)}} \left\{ d\_{E} \left( \mathbf{x}\_{j}^{(m)}, \mathbf{x}\_{i}^{(u)} \right) \right\} \tag{8}$$

This criterion is more useful for evaluating the design of $C_{mc}$ when all users observe the same fading channel coefficients over their REs.
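Equations (7) and (8) are easy to evaluate numerically. The sketch below uses a toy 2-dimensional constellation of our own choosing (not an optimized SCMA mother constellation), with points stored as columns of an $N \times M$ matrix:

```python
import numpy as np
from itertools import combinations

# Toy 2-dimensional complex constellation with M = 4 points (columns);
# illustrative values only, not a designed mother constellation.
C = np.array([[ 1+0j, -1+0j, 0+1j,  0-1j],
              [ 0+1j,  0-1j, 1+0j, -1+0j]])  # shape (N, M)

def min_euclidean_distance(C):
    # Eq. (8): minimum over all distinct point pairs of the distance in Eq. (7)
    return min(np.linalg.norm(C[:, a] - C[:, b])
               for a, b in combinations(range(C.shape[1]), 2))

print(min_euclidean_distance(C))  # 2.0
```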

#### *3.2.2 Euclidean kissing number*

The key here is to count the number of distinct constellation point pairs which are separated by a Euclidean distance equal to the minimum Euclidean distance between any two points of the multi-dimensional constellation.

### *3.2.3 Product distance*

The product distance between two $N$-dimensional complex constellation points, $\mathbf{x}_j^{(m)} = [x_{j,1}^{(m)}, \cdots, x_{j,N}^{(m)}]^T$ and $\mathbf{x}_i^{(u)} = [x_{i,1}^{(u)}, \cdots, x_{i,N}^{(u)}]^T$, is expressed as,

$$\mathrm{d}\_{P}\left(\mathbf{x}\_{j}^{(m)}, \mathbf{x}\_{i}^{(u)}\right) = \prod\_{\substack{1 \le n \le N \\ x\_{j,n}^{(m)} \ne x\_{i,n}^{(u)}}} \left| x\_{j,n}^{(m)} - x\_{i,n}^{(u)} \right| \tag{9}$$

The minimum product distance of a multi-dimensional constellation is given by,


$$\mathbf{d}\_{P}^{(\min)} = \min\_{\substack{1 \le u, m \le M \\ 1 \le i, j \le J \\ (j,m) \ne (i,u)}} \left\{ d\_{P} \left( \mathbf{x}\_{j}^{(m)}, \mathbf{x}\_{i}^{(u)} \right) \right\} \tag{10}$$

This criterion is preferred when evaluating the design of $C_{mc}$ in the strong fading channel case, i.e., when the channel coefficients over the employed subcarriers are different.

#### *3.2.4 Product kissing number*

It is the number of distinct constellation point pairs with a product distance equal to the minimum product distance.
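Both the product distance of Eq. (9) and the two kissing numbers can be computed with the same pairwise loop. The helper names and the toy constellation below are our own; a designed mother constellation would simply replace `C`:

```python
import numpy as np
from itertools import combinations

def product_distance(p, q, tol=1e-12):
    # Eq. (9): product of |p_n - q_n| over the dimensions where the points differ
    diff = np.abs(p - q)
    diff = diff[diff > tol]
    return diff.prod() if diff.size else 0.0

def min_dist_and_kissing(C, dist, tol=1e-9):
    # Minimum pairwise distance and the number of pairs attaining it
    ds = [dist(C[:, a], C[:, b]) for a, b in combinations(range(C.shape[1]), 2)]
    d_min = min(ds)
    kissing = sum(1 for d in ds if abs(d - d_min) < tol)
    return d_min, kissing

# Toy 2-dimensional constellation, points as columns (illustrative values only)
C = np.array([[ 1+0j, -1+0j, 0+1j,  0-1j],
              [ 0+1j,  0-1j, 1+0j, -1+0j]])
d_min, kp = min_dist_and_kissing(C, product_distance)
print(round(d_min, 6), kp)  # 2.0 4
```

Passing a Euclidean distance function instead of `product_distance` yields the Euclidean kissing number of subsection 3.2.2.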

To understand why SCMA performs well, the concept of shaping gain was introduced. The idea is to measure how the inherent shape of a multi-dimensional constellation, i.e. possessing additional dimensions in each constellation point or additional degrees of freedom, enhances the distancing property of the SCMA constellation. We can assume that increasing the shaping gain means enhancing the overall system performance. The shaping gain is calculated as the ratio of the minimum distance between the points of the multi-dimensional constellation to the minimum distance between the points of a one-dimensional one, where the two constellations must have the same total power distributed over the same number of points. For instance, the authors in [5, 7] proposed a 4-point two-dimensional mother constellation. With the quadrature phase shift keying constellation chosen as the reference one-dimensional constellation, the resulting shaping gain, calculated based on the Euclidean distance, is 1.25 dB.
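The shaping gain computation described above can be sketched as follows. The function names are our own, and as a self-contained sanity check we compare QPSK against itself (which must give 0 dB); comparing a real multi-dimensional mother constellation against QPSK would reproduce the kind of figure reported in [5, 7].

```python
import numpy as np
from itertools import combinations

def normalize_power(C):
    # Scale the constellation (points as columns) to unit average energy
    return C / np.sqrt(np.mean(np.sum(np.abs(C) ** 2, axis=0)))

def d_min(C):
    return min(np.linalg.norm(C[:, a] - C[:, b])
               for a, b in combinations(range(C.shape[1]), 2))

def shaping_gain_db(C_multi, C_ref):
    # Distance-based gain in dB between two equal-power constellations
    return 20 * np.log10(d_min(normalize_power(C_multi)) / d_min(normalize_power(C_ref)))

# Reference: unit-energy QPSK, treated as a 1-dimensional complex constellation
qpsk = np.array([[1+1j, 1-1j, -1+1j, -1-1j]]) / np.sqrt(2)
print(round(shaping_gain_db(qpsk, qpsk), 4))  # 0.0 by construction
```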

Several new methods to design the SCMA mother constellation have been proposed in the literature. In the following paragraphs, an overview of some interesting ones is presented; for each method, the design criterion and the employed distance definition are highlighted.

The Euclidean distance is intuitively the first to be used. One approach is to fix a minimum Euclidean distance between any two points of the multi-dimensional constellation and to optimize another property; for instance, the average constellation energy was minimized in [5], and the resulting mother constellation is called the $M$-Beko. In [6], the authors proposed the $M$-Peng scheme, which fixes the average energy and tries to maximize the minimum Euclidean distance between any two points of the alphabet.

Several research works aimed to reduce the number of superposing constellation points over each subcarrier or dimension, as shown in **Figure 4**; however, the users are

#### **Figure 4.**

*Low-projection constellation: An example of QAM SCMA constellation points of size $M = 4$ with two non-zero REs, labeled based on Gray coding. The first step rotates the constellations to ensure a maximum product distance between symbols, which enhances the detection process. The second step can further reduce the complexity of the receiver since some constellation points collide over each RE; for instance, the constellation points corresponding to* 00 *and* 11 *in the $M$-sized constellation collide over the first subcarrier, but they have maximum distance over the second one, which makes them separable using an $M_p$-QAM constellation with $M_p \le M$.*

distinct on the other dimensions, which allows us to efficiently decode the codewords of each of them [8–11]. This type of mother constellation is described as a low-projection one since it virtually reduces the codebook size from $M$ to $M_p$, where $M_p$ is the size of the low-projected constellation. This leads to a further complexity reduction since the latter is directly related to the *effective* codebook size; for instance, we can reduce the MPA complexity to $M_p^{d_f}$ instead of $M^{d_f}$. The low-projection approach is generally associated with the *product distance* criterion, which has to be carefully adjusted to enhance the performance in the low signal-to-noise ratio (SNR) zone without compromising the performance in the high SNR one.

The design of constellations or code dictionaries is well studied in the state of the art; we can mention, for example, digital modulation, CDMA, and channel and source coding. This rich literature inspired some designs of multi-dimensional constellations. For instance, the authors in [9] proposed the T$M$QAM SCMA codebook, whose design is based on quadrature amplitude modulation (QAM). The idea is to first design two $N$-dimensional real constellations; the $N$-dimensional complex constellation is then conceived by applying a shuffling method on the Cartesian product of these $N$-dimensional real points. The optimization process is concluded by a rotation operation which aims at maximizing the minimum product distance of the multi-dimensional constellation. The $M$LQAM scheme in [10] is a hybrid between the shuffling method and the low-projection constellation approach. All the above-presented mother constellation designs did not take the wireless channel characteristics into consideration. In contrast, the research work in [12, 13] derived a design criterion from the cutoff rate of MIMO systems when the channel is assumed to be Rayleigh fading; the conceived constellation for SCMA systems is called $M$-Bao. In fact, the $M$ multi-dimensional points of the T$M$QAM, $M$LQAM and $M$-Bao are based on the $M$ corners of a $\log_2(M)$-dimensional hypercube. This inspired the authors in [14] to consider that the solution of the optimization problem in (4) can be obtained through an optimization of the rotation angles of a hypercube; this method is denoted as $M$HQAM.

Analytical analysis showed that the complexity of the MPA decoder is reduced from $M^{d_f}$ to $(\log_2(M))^{d_f}$. Two examples of 2-dimensional mother constellations with 4 codewords are illustrated in **Table 1** and **Figure 5**; we hope that this will help the reader to understand the structure of a mother constellation.

Most of the above multi-dimensional constellations assume that the complex symbols can be freely selected. Some propositions relax the constraints on the search space by placing the constellation points of each dimension on multi-radius concentric rings [10, 15–17]. In [10], the symbols of each low-projection complex dimension are selected to form an $M$-point circular constellation; the $M$CQAM is based on the signal space diversity for MIMO systems over Rayleigh fading channels and results in a complexity reduction from $M^{d_f}$ to $(M-1)^{d_f}$. The star-QAM constellation was proposed for digital modulation with the aim of flexibly adapting the ratios of multi-radius concentric rings. This approach was extended to multi-dimensional SCMA codebook design [15, 16]. The idea is to construct the first dimension of the mother constellation from a star-QAM constellation of size $M$; afterwards, the following dimensions are deduced by applying some operations, for instance scaling and permuting, on the first dimension. The parameters of these operations are calculated through computer search, which opens the door to designing constellations with large size and/or high dimension. An example of a constellation designed with this approach is represented in **Figure 6**. In [16], it was proposed to evaluate the design by directly applying the design criterion on the generated codewords of all users, contrary to other methods where only the mother constellation is evaluated. The applied optimization criterion is the pairwise error probability between any two transmitted codewords $\mathbf{x}^{(1)}$, $\mathbf{x}^{(2)}$, which is given by,

$$\mathbb{P}\left(\mathbf{x}^{(1)},\mathbf{x}^{(2)}|\mathbf{H}\right) = Q\left(\sqrt{\frac{\left\|\mathbf{H}(\mathbf{x}^{(1)} - \mathbf{x}^{(2)})\right\|^2}{2N\_0}}\right). \tag{11}$$

where **H** is the uplink channel matrix of SCMA system as defined in Eq. (1).

In [17], each dimension of the mother constellation belongs to a ring such that the $M$ complex points form a uniformly spaced phase shift keying (PSK) constellation. Several dimensions mean several PSK rings with different radius values; hence the


#### **Table 1.**

*This table presents the T4QAM [9] and 4LQAM [10] mother constellations (4 codewords with 2 non-zero dimensions), where $x_n^{(m)}$ belongs to dimension $n$, i.e. $x_n^{(m)}$ is the $n$th entry of the $m$th codeword.*

#### **Figure 5.**

*Two examples of 2-dimensional mother constellations, namely T4QAM [9] and 4LQAM [10], each one is composed of 4-codewords: (a) the one-dimensional constellation of T4QAM as projected on the first dimension, (b) the one-dimensional constellation of T4QAM as projected on the second dimension, (c) the one-dimensional constellation of 4LQAM as projected on the first dimension and (d) the one-dimensional constellation of 4LQAM as projected on the second dimension.*

multi-dimensional constellation is an amplitude and phase shift keying (APSK) constellation. In some applications, the PSK rings outperform square-shaped QAM constellations since they provide a limited power peak. The authors proposed a multi-stage optimization: the coded modulation capacity is employed as a design criterion for the first dimension of the mother constellation, then the other ones are optimized using permutations.

Once the mother constellation is designed, optimized and evaluated based on one of the above-presented design criteria, the applied transformations, which are used to generate the *J* codebooks, must be designed to preserve the characteristics of the mother constellation.

#### **3.3 Transformation operators design**

The design procedure of SCMA codebooks was introduced in **Figure 3**. First, the $N \times M$ mother constellation, $\mathbf{C}_{mc}$, is designed; then the sparse codebook of SCMA user $j$ is constructed by applying a set of operators, $\mathbf{T}_j$, on $\mathbf{C}_{mc}$, and a mapping matrix, $\mathbf{V}_j$, as seen in Eq. (5). Transforming a complex constellation can be conducted based on typical operations such as complex conjugation, rotation, interleaving and vector permutation; several operators can be combined in some cases. Hence, the transformation operators must be chosen carefully so that the good characteristics of the mother constellation are conserved; their design was recently investigated [16–18, 20].

*Multi-Dimensional Codebooks for Multiple Access Schemes DOI: http://dx.doi.org/10.5772/intechopen.110032*

**Figure 6.**

*An SCMA codebook of size $M = 4$ and sparsity degree $N = 2$ can be designed, for instance, based on a four-ring star-QAM mother constellation. Here, $\alpha$ and $\beta$ are two real design parameters.*

The Euclidean distance is widely employed as a design criterion for the mother constellation; its distance profile can be preserved by applying unitary rotation matrices [5–9, 19]. In this case, it is possible to merge the mapping operation and the transformation one to mold a new transformed factor graph matrix, $\mathbf{F}_T$. An example of a transformed factor graph matrix is expressed as,

$$\mathbf{F}\_T = \begin{bmatrix} \mathbf{0} & \varphi\_1 & \varphi\_2 & \mathbf{0} & \varphi\_3 & \mathbf{0} \\ \varphi\_2 & \mathbf{0} & \varphi\_3 & \mathbf{0} & \mathbf{0} & \varphi\_1 \\ \mathbf{0} & \varphi\_2 & \mathbf{0} & \varphi\_1 & \mathbf{0} & \varphi\_3 \\ \varphi\_1 & \mathbf{0} & \mathbf{0} & \varphi\_3 & \varphi\_2 & \mathbf{0} \end{bmatrix} \tag{12}$$

where $\varphi_1 = e^{j\theta_1}$, $\varphi_2 = e^{j\theta_2}$ and $\varphi_3 = e^{j\theta_3}$. Traditionally, $\theta_1 = 0$, $\theta_2 = \frac{\pi}{3}$, and $\theta_3 = \frac{2\pi}{3}$. In this circumstance, the codebook of user 1, for instance, is calculated based on the following mapping and transformation matrices,

$$\mathbf{V}\_1 = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \end{bmatrix} \qquad \text{and} \qquad \mathbf{T}\_1 = \begin{bmatrix} e^{j\theta\_2} & \mathbf{0} \\ \mathbf{0} & e^{j\theta\_1} \end{bmatrix}.$$

It is worth noting that the non-zero entries are different not only in each row of $\mathbf{F}_T$ but also in each column; this property is called the Latin criterion. This means that the power variation and dimensional dependency can be controlled at the same time without compromising the Euclidean distance profile of the multi-dimensional mother constellation [8, 19]. **Figure 7** illustrates an example of an SCMA system with 6 users ($J = 6$); their codebooks of size 4 ($M = 4$) are generated from the T4QAM mother constellation (described in **Table 1**) by employing unitary rotation matrices as described in Eq. (12). The figure depicts how the constellation of each user is projected on each of its two associated REs ($N = 2$). More details on these codebooks are given in Appendix A.
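Generating one user's codebook from the mother constellation then amounts to the matrix product of Eq. (6). The sketch below uses a placeholder mother constellation (not T4QAM) and the mapping/transformation matrices of user 1 given above:

```python
import numpy as np

K, N, M = 4, 2, 4
theta = [0, np.pi / 3, 2 * np.pi / 3]  # theta_1, theta_2, theta_3

# Illustrative N x M mother constellation (placeholder values, not T4QAM)
C_mc = np.array([[ 1+0j, -1+0j, 0+1j,  0-1j],
                 [ 0+1j,  0-1j, 1+0j, -1+0j]])

# User 1: non-zero REs 2 and 4, phases theta_2 and theta_1 (cf. Eq. (12))
V1 = np.zeros((K, N), dtype=complex)
V1[1, 0] = V1[3, 1] = 1
T1 = np.diag([np.exp(1j * theta[1]), np.exp(1j * theta[0])])

C1 = V1 @ T1 @ C_mc   # K x M codebook of user 1, cf. Eq. (6)
print(C1.shape)       # (4, 4)
```

Since $\mathbf{T}_1$ is unitary (a diagonal of unit-modulus phases), the per-dimension magnitudes, and hence the Euclidean distance profile of the mother constellation, are preserved.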

Multi-user codebook generation can be further refined by optimizing computer-designed rotation matrices instead of the unitary rotation ones [12]. Furthermore, if the communication channel is assumed to be known, its phases can be exploited to extract random rotation angles which are used to generate the codebooks from the mother constellation; the authors in [18] combined the SCMA design with a form of codebook encryption which can secure the link at low complexity. The rotation operations are not the only ones that can be employed. In [20], the transformation operator is designed based on a permutation set which is optimized to improve the detection reliability of the first decoded user; this largely improves the performance of the SCMA receiver. The proposed design criterion tries to maximize the sum of distances among codewords which are multiplexed on the same RE (sum of distances per dimension). The factor graph matrix, $\mathbf{F}$, defines the positions of the non-zero elements of each user, which are assumed to be fixed in the majority of SCMA designs, as explained in subsection 3.1. One way to design a transformation operator is to differentiate the non-zero locations according to the values of the transmitted data bits, which leads to a permutation-based SCMA scheme [21]. This permutation approach does not suffer from a complexity overhead compared to the traditional one, while the spectral efficiency does improve. This effort was extended by combining matrix permutation and rotation operations to define the transformation operator, as in [15, 16].

#### **Figure 7.**

*An example of an SCMA system with 6 users ($J = 6$); their codebooks of size 4 ($M = 4$) are generated based on the T4QAM mother constellation by applying unitary rotation matrices, as described in Eq. (12). This figure depicts how the constellation of each user is projected on each of its two associated REs ($N = 2$).*

#### **Figure 8.**

*BER as a function of SNR for SCMA system with different codebook designs: The number of orthogonal REs is 4, the number of users is 6, and the channel fading is assumed to be Rayleigh distributed.*

Ultimately, the objective is to minimize the rate of wrongly detected bits, also called the bit error rate (BER). For a given codeword error rate, the BER depends on how the codewords are labeled; hence, choosing the appropriate labeling is another aspect to be studied. In [17], the labeling was optimized to adjust the slope of the extrinsic information transfer (EXIT) chart.

The influence of the codebook design on the performance of an SCMA system is confirmed through simulations. **Figure 8** depicts the BER as a function of the SNR with different codebooks over a Rayleigh fading channel. The star-QAM based SCMA design outperforms the other ones.

### **4. SCMA decoder design**

It is worth mentioning that the SCMA encoder and decoder are two blocks among others at the transmitter and receiver, respectively, as shown in the block diagram of **Figure 9**. At the receiver side, the SCMA codewords must be segregated, or decoded; this operation is preceded by the OFDM demodulation and the channel estimation, and followed by the deinterleaving and channel decoding.

At the transmitter, the bit-to-codeword mapping for each user is followed by the superimposition of the $J$ mapped codewords, which are selected from one of the above presented SCMA codebooks. At the receiver, the SCMA decoder aims to separate the

**Figure 9.**

*This diagram illustrates the different essential blocks of the transmitter and receiver of SCMA system.*

superimposed codewords even though several users occupy the same REs, as described by the factor graph matrix. Most existing mechanisms employed for SCMA decoders are based on MPA, one of its variations, or its combination with other methods. In this section, we will present the basics of SCMA decoding.

### **4.1 Message passing algorithm**

### *4.1.1 Traditional MPA*

MPA is an iterative method based on passing messages among concerned nodes. The nodes are the users, considered as the variable nodes (VNs), and the subcarriers (or REs), considered as the function nodes (FNs); the messages are the extrinsic information exchanged among nodes, as illustrated in **Figure 2**. The idea is that each FN calculates its outgoing message to a given VN depending on the incoming messages received from the remaining VNs. The latter play the reciprocal role, that is, each VN replies by sending a message computed based on the messages received from the rest of the FNs. This exchange along all the edges, i.e. between all VNs and all FNs, is repeated at each iteration. After a given number of iterations, the bits of each user are estimated through the log-likelihood ratios (LLRs) of each coded bit. The MPA method is shown in Algorithm 1 and is based on three main steps: initialization, iterative message passing along edges and decision making.

**Algorithm 1**: Message Passing Algorithm.

**Input:** $\mathbf{y}$, $N_0$, $C_j$, $h_j$, $j = 1, \cdots, J$, $N_{\text{iter}}$.

**Output:** Estimation of the bits which were transmitted by each user.

**Definitions.** Users are represented by VNs, subcarriers are represented by FNs,

$\mathcal{U}(k) = \{\text{all the VNs which are connected to } \text{FN}_k\}$, $k = 1, \cdots, K$,

$\mathcal{R}(j) = \{\text{all the FNs which are connected to } \text{VN}_j\}$, $j = 1, \cdots, J.$

### **Step 1: Initialization.**

Initially, each of the $M$ codewords is assumed to be equally likely for each user:

$$V_{j \to k}^{0}\left(\mathbf{x}_{j}^{(m)}\right) = \mathbb{P}\left(\mathbf{x}_{j}^{(m)}\right) = \frac{1}{M}, \quad j = 1, \cdots, J, \; k \in \mathcal{R}(j)$$

**Step 2: Extrinsic information exchange among VNs and FNs**

**For** $t = 1, \cdots, N_{\text{iter}}$ **do**:

1. The message to be sent from FN$_k$, $k = 1, \cdots, K$, to VN$_j$, $j \in \mathcal{U}(k)$, for each codeword $\mathbf{x}_j^{(m)} \in C_j$, $m = 1, \cdots, M$, is computed by,

$$U_{k \to j}^{t}\left(\mathbf{x}_{j}^{(m)}\right) = \sum_{\left\{\mathbf{x}_{i} \in C_{i} \,\mid\, i \in \mathcal{U}(k)\setminus j\right\}} \exp\left\{-\frac{1}{N_{0}}\left| y_{k} - h_{j,k}\, x_{j,k}^{(m)} - \sum_{i \in \mathcal{U}(k)\setminus j} h_{i,k}\, x_{i,k} \right|^{2}\right\} \prod_{i \in \mathcal{U}(k)\setminus j} V_{i \to k}^{t-1}\left(\mathbf{x}_{i}\right),$$

2. The message to be sent from VN$_j$, $j = 1, \cdots, J$, to FN$_k$, $k \in \mathcal{R}(j)$, for each codeword $\mathbf{x}_j^{(m)} \in C_j$, $m = 1, \cdots, M$, is calculated as,


$$V_{j \to k}^{t}\left(\mathbf{x}_{j}^{(m)}\right) = \frac{\prod_{i \in \mathcal{R}(j)\setminus k} U_{i \to j}^{t}\left(\mathbf{x}_{j}^{(m)}\right)}{\sum_{\mathbf{x}_{j}^{(l)} \in C_{j}} \prod_{i \in \mathcal{R}(j)\setminus k} U_{i \to j}^{t}\left(\mathbf{x}_{j}^{(l)}\right)}$$

It is essential to normalize this message in order to guarantee the numerical stability of MPA.

#### **Step 3: Received bits estimation**

1. The a posteriori probability of each codeword for each user is given by,

$$\mathbb{P}\left(\mathbf{x}_{j}^{(m)}\right) = \prod_{k \in \mathcal{R}(j)} U_{k \to j}^{N_{\text{iter}}} \left(\mathbf{x}_{j}^{(m)}\right), \quad m = 1, \dots, M, \; j = 1, \dots, J.$$

2. The log-likelihood ratio (LLR) of each coded bit $b_i$, $1 \le i \le \log_2(M)$, is given by,

$$\text{LLR}(b\_i) = \log\left(\frac{\mathbb{P}(b\_i = 0)}{\mathbb{P}(b\_i = 1)}\right) = \log\left(\frac{\sum\_{\begin{subarray}{c} \mathbf{x}\_j^{(m)} \in \mathbf{C}\_j \mid b\_i = 0 \end{subarray}} \mathbb{P}\left(\mathbf{x}\_j^{(m)}\right)}{\sum\_{\begin{subarray}{c} \mathbf{x}\_j^{(m)} \in \mathbf{C}\_j \mid b\_i = 1 \end{subarray}} \mathbb{P}\left(\mathbf{x}\_j^{(m)}\right)}\right)$$

3. Finally, the value of each LLR is employed to decide on the corresponding bit as follows,

$$
\hat{b}_i = \begin{cases}
1 & \text{if } \mathrm{LLR}(b_i) \le 0 \\
0 & \text{otherwise.}
\end{cases}
$$
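As an illustration, the three steps of Algorithm 1 can be sketched in a few dozen lines of Python. The implementation below is our own illustrative code, not the chapter's; the toy factor graph, codebooks, and channel gains at the end are hypothetical values chosen so the example is easy to follow.

```python
import itertools
import math

def mpa_decode(y, H, codebooks, F, N0, n_iter=4):
    """Sketch of SCMA decoding with the message passing algorithm.
    y: list of K received samples; H: J x K channel gains;
    codebooks: J codebooks, each a list of M codewords of length K;
    F: K x J factor graph (1 where user j occupies RE k)."""
    K, J = len(F), len(F[0])
    M = len(codebooks[0])
    U_nb = [[j for j in range(J) if F[k][j]] for k in range(K)]  # VNs per FN
    R_nb = [[k for k in range(K) if F[k][j]] for j in range(J)]  # FNs per VN
    # Step 1: uniform priors on every VN -> FN edge
    V = {(j, k): [1.0 / M] * M for j in range(J) for k in R_nb[j]}
    U = {}
    for _ in range(n_iter):
        # Step 2.1: FN -> VN messages (marginalize over the other users)
        for k in range(K):
            for j in U_nb[k]:
                others = [i for i in U_nb[k] if i != j]
                msg = [0.0] * M
                for m in range(M):
                    for combo in itertools.product(range(M), repeat=len(others)):
                        s = H[j][k] * codebooks[j][m][k]
                        p = 1.0
                        for i, mi in zip(others, combo):
                            s += H[i][k] * codebooks[i][mi][k]
                            p *= V[(i, k)][mi]
                        msg[m] += math.exp(-abs(y[k] - s) ** 2 / N0) * p
                U[(k, j)] = msg
        # Step 2.2: VN -> FN messages, normalized for numerical stability
        for j in range(J):
            for k in R_nb[j]:
                msg = [1.0] * M
                for i in R_nb[j]:
                    if i != k:
                        msg = [a * b for a, b in zip(msg, U[(i, j)])]
                z = sum(msg)
                V[(j, k)] = [a / z for a in msg]
    # Step 3: a posteriori probabilities and hard decision per user
    decisions = []
    for j in range(J):
        post = [1.0] * M
        for k in R_nb[j]:
            post = [a * b for a, b in zip(post, U[(k, j)])]
        decisions.append(max(range(M), key=lambda m: post[m]))
    return decisions

# toy system (hypothetical): K = 2 REs, J = 3 users, M = 2 codewords
F = [[1, 1, 0], [0, 1, 1]]      # factor graph
C0 = [[1, 0], [-1, 0]]          # user 0 occupies RE 0 only
C1 = [[1j, 1], [-1j, -1]]       # user 1 occupies both REs
C2 = [[0, 1j], [0, -1j]]        # user 2 occupies RE 1 only
H = [[1, 1], [1, 1], [1, 1]]    # flat unit-gain channels
sent = [0, 1, 0]
y = [sum(cb[m][k] for cb, m in zip((C0, C1, C2), sent)) for k in range(2)]
```

On the noiseless toy observation, `mpa_decode(y, H, [C0, C1, C2], F, 0.1)` recovers the transmitted codeword indices `[0, 1, 0]`; the exponential weighting in step 2.1 is exactly the term whose cost motivates the Max-Log and Log variants discussed next.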

#### *4.1.2 Variations of MPA*

Despite being the reference decoder for SCMA, a complexity evaluation of MPA reveals that it relies on a large number of exponential operations, which are computationally expensive. To reduce this complexity and meet the stringent requirements of future wireless networks, several variations of MPA were proposed; among them we present here the Max-Log-MPA and Log-MPA methods [22].

⊛ **Max-Log-MPA**: It is a simplified version of MPA based on a mathematical simplification which approximates the logarithm of a sum of exponentials by a maximum operation. The key purpose is to move the iterative decoding process into the logarithmic domain, which eliminates the exponential terms in MPA by employing the simplified form of the *Jacobian logarithm*,

$$\log(\exp(a\_1) + \dots + \exp(a\_n)) \approx \max(a\_1, \dots, a\_n) \tag{13}$$

Thus, passing the numerous messages from FNs to VNs, and vice versa, becomes far less expensive in terms of complexity. Based on (13), the expression of LLR$(b_i)$ presented in Algorithm 1 is modified as follows,

$$\text{LLR}(b\_i) = \max\_{\left\{ \mathbf{x}\_j^{(m)} \in \mathsf{C}\_j \middle| b\_i = 0 \right\}} \left( \log \left( \mathbb{P} \left( \mathbf{x}\_j^{(m)} \right) \right) \right) - \max\_{\left\{ \mathbf{x}\_j^{(m)} \in \mathsf{C}\_j \middle| b\_i = 1 \right\}} \left( \log \left( \mathbb{P} \left( \mathbf{x}\_j^{(m)} \right) \right) \right) \tag{14}$$

⊛ **Log-MPA**: The approximation of the *Jacobian logarithm* as presented in (13) makes the Max-Log-MPA a sub-optimal solution and results in a performance degradation. To mitigate this issue, a correction term is added by using the exact form of the *Jacobian logarithm*. The adopted expression is given by,

$$\log(\exp(a\_1) + \dots + \exp(a\_n)) = a\_j + \log\left(1 + \sum\_{i \in \{1\dots n\} \backslash j} \exp\left(-|a\_j - a\_i|\right)\right) \tag{15}$$

where $a_j = \max(a_1, \dots, a_n)$. Hence, the LLRs are updated as below, rather than as in (14),

$$\begin{split} \text{LLR}(b_{i}) &= \left[ \max_{\left\{ \mathbf{x}_{j}^{(m)} \in C_{j} \mid b_{i} = 0 \right\}} \left( \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m)} \right) \right) \right) + \log \left( 1 + \sum_{m' \in \{1..M\} \setminus m^{\ast}} \exp \left( - \left| \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m^{\ast})} \right) \right) - \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m')} \right) \right) \right| \right) \right) \right] \\ &\quad - \left[ \max_{\left\{ \mathbf{x}_{j}^{(m)} \in C_{j} \mid b_{i} = 1 \right\}} \left( \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m)} \right) \right) \right) + \log \left( 1 + \sum_{m' \in \{1..M\} \setminus m^{\ast}} \exp \left( - \left| \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m^{\ast})} \right) \right) - \log \left( \mathbb{P} \left( \mathbf{x}_{j}^{(m')} \right) \right) \right| \right) \right) \right] \end{split} \tag{16}$$

where $m^{\ast}$ denotes the index of the maximizing codeword within each bracket.
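The accuracy of the two approximations can be checked numerically. The sketch below (our own illustration, not the chapter's code) contrasts a numerically stable exact log-sum-exp with the Max-Log approximation (13) and the corrected form (15); since $a_j$ is the maximum, (15) is in fact exact.

```python
import math

def logsumexp(a):
    # numerically stable exact log(exp(a_1) + ... + exp(a_n))
    m = max(a)
    return m + math.log(sum(math.exp(x - m) for x in a))

def max_log(a):
    # Max-Log approximation (13): keep only the dominant term
    return max(a)

def jacobian_log(a):
    # corrected form (15): dominant term a_j plus a correction term;
    # exact because a_j is the maximum, so |a_j - a_i| = a_j - a_i
    aj = max(a)
    rest = list(a)
    rest.remove(aj)  # drop one occurrence of the maximum
    return aj + math.log(1 + sum(math.exp(-abs(aj - ai)) for ai in rest))
```

For example, with `a = [0.2, -1.3, 2.5]`, `max_log(a)` returns the dominant term only, while `jacobian_log(a)` matches `logsumexp(a)` to machine precision.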

The performance of the above-presented variations of MPA, namely MPA, Log-MPA, and Max-Log-MPA, was evaluated; **Figure 10** depicts the BER as a function of SNR over a Rayleigh fading channel. The results show that the performance of Log-MPA is near-optimal when compared to that of MPA. The same does not hold for Max-Log-MPA, which can be explained by the correction term added in Log-MPA that compensates for the approximation loss. On the other hand, the performance degradation of Log-MPA and Max-Log-MPA, due to the approximation used in each method, can be neglected in the high-SNR region. The Max-Log-MPA nevertheless remains useful, since it requires less computational effort than Log-MPA, which is itself still complex enough to be challenging for energy-sensitive applications. It is worth mentioning that reducing the computational complexity of the above-mentioned decoding methods is also possible through reducing the value of $d_f$, at the expense of severely constraining the codebook design.

Generally speaking, the complexity of MPA depends strongly on the number of iterations, which is usually one of its fixed parameters. This is not ideal since, on the one hand, increasing the number of iterations considerably increases the complexity, and on the other hand, iterating too few times leads to performance degradation. Hence, finding a near-optimal number of iterations is very useful. The performance of MPA as a function of SNR for different numbers of iterations is shown in **Figure 11**

#### **Figure 10.**

*BER as a function of SNR for the MPA, Log-MPA, and Max-Log-MPA variations: The number of orthogonal REs is 4, the number of users is 6, and the channel fading is assumed to be Rayleigh distributed.*

#### **Figure 11.**

*Effect of the number of iterations on MPA performance: The number of orthogonal REs is 4, the number of users is 6, and the channel is assumed to be AWGN or Rayleigh fading.*

when the channel is assumed to be Gaussian or Rayleigh distributed. It is observed that the BER decreases as the number of iterations increases under both channel assumptions; nevertheless, beyond a certain limit, the performance improvement saturates. The same conclusions are valid for Log-MPA and Max-Log-MPA, as reported in [22]. Therefore, a good compromise is to set the number of iterations to 4. Another approach is to monitor the convergence rate so that the number of iterations can be adjusted accordingly; such a flexible number of iterations can be powerful when the convergence rate is efficiently measured.
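One way to implement such a flexible stopping rule is sketched below; this is our own assumption of how the supervision could look (the helper name and the contraction used in the usage example are hypothetical), not a method prescribed by the chapter. The loop stops as soon as the largest change in any edge message falls below a tolerance.

```python
import numpy as np

def iterate_until_converged(step, V0, max_iter=20, tol=1e-4):
    """Run `step` (one full MPA round mapping the dict of edge messages
    V to its updated version) until the messages stop changing.
    Returns the final messages and the number of iterations used."""
    V = V0
    for t in range(1, max_iter + 1):
        V_new = step(V)
        # largest absolute change over all edges measures convergence
        delta = max(np.max(np.abs(V_new[e] - V[e])) for e in V)
        V = V_new
        if delta < tol:
            break
    return V, t
```

As a stand-in for a real MPA round, a simple contraction toward the uniform message, `lambda V: {e: (V[e] + 0.5) / 2 for e in V}`, converges geometrically and triggers the early stop well before `max_iter`.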

### **5. Conclusions**

In this chapter, we presented the structure and basic principles of SCMA. Then, SCMA encoder and decoder designs were reviewed through their best-known techniques. A simulation-based comparison among different existing approaches for codebook design, as well as for signal decoding, was conducted.


## **A. Appendix**

To better illustrate the SCMA mapping, a numerical example of complete SCMA codebooks, as depicted in **Figures 5** and **7**, is provided in the following (**Figure 12**).

**Figure 12.** *The 2-dimensional codebooks with 4 codewords, generated for $J = 6$ users, as described in Figure 7.*

### **Author details**

Kais Hassan<sup>1</sup> \*, Kosai Raoof<sup>1</sup> and Pascal Chargé<sup>2</sup>

1 Laboratoire d'Acoustique de l'Université du Mans (LAUM), Le Mans University, Le Mans, France

2 Institut d'Electronique et des Technologies du numéRique (IETR), Polytech Nantes, Graduate School of Engineering of Nantes University, Nantes, France

\*Address all correspondence to: kais.hassan@univ-lemans.fr

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Rebhi M, Hassan K, Raoof K, Chargé P. Sparse code multiple access: Potentials and challenges. IEEE Open Journal of the Communications Society. 2021;**2**:1205-1238

[2] Rebhi M, Hassan K, Raoof K, Chargé P. Deep learning for a fair distance-based SCMA detector. In: Proceedings of IEEE Wireless Communications and Networking Conference (WCNC). Austin-Texas, USA; April 2022

[3] Rebhi M, Hassan K, Raoof K, Chargé P. An adaptive uplink SCMA scheme based on channel state information. In: Proceedings of URSI, Future Network: 5G beyond Workshop. Paris, France; Mars 2020. pp. 1-7

[4] Vameghestahbanati M, Marsland ID, Gohary RH, Yanikomeroglu H. Multidimensional constellations for uplink SCMA systems—A comparative study. IEEE Communications Surveys Tutorials. 2019;**21**(3):2169-2194. DOI: 10.1109/COMST.2019.2910569

[5] Beko M, Dinis R. Designing good multi-dimensional constellations. IEEE Wireless Communications Letters. 2012; **1**(3):221-224

[6] Peng J, Chen W, Bai B, Guo X, Sun C. Joint optimization of constellation with mapping matrix for SCMA codebook design. IEEE Signal Processing Letters. 2017;**24**(3):264-268. DOI: 10.1109/ LSP.2017.2653845

[7] Nikopour H, Baligh H. Sparse code multiple access. In: Proceedings of IEEE 24th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), London, UK. 2013. pp. 332-336

[8] Taherzadeh M, Nikopour H, Bayesteh A, Baligh H. SCMA codebook design. In: Proceedings of 2014 IEEE 80th Vehicular Technology Conference (VTC2014-Fall), Vancouver, BC, Canada. 2014. pp. 1-5

[9] Taherzadeh M, Nikopour H, Bayesteh A, Baligh A. System. Method for Designing and Using Multidimensional Constellations. USA: U.S Patent 9,509,379; November 2016

[10] Metkarunchit T. SCMA codebook design based on circular-QAM. In: Proceedings of 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA. 2017. pp. 3E1-1–3E1-8

[11] Wei F, Chen W. Low complexity iterative receiver design for sparse code multiple access. IEEE Transactions on Communications. 2017;**65**(2):621-634

[12] Bao J, Ma Z, Ding Z, Karagiannidis GK, Zhu Z. On the design of multiuser codebooks for uplink SCMA systems. IEEE Communications Letters. 2016;**20**(10):1920-1923

[13] Bao J, Ma Z, Mahamadu MA, Zhu Z, Chen D. Spherical codes for SCMA codebook. In: Proceedings of 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), Nanjing, China. 2016. pp. 1-5

[14] Vameghestahbanati M. Hypercube-based multidimensional constellation design for uplink SCMA systems. In: Proceedings of 2020 IEEE International Conference on Communications (ICC), Dublin, Ireland. 2020

[15] Yu L, Lei X, Fan P, Chen D. An optimized design of SCMA codebook based on star-QAM signaling constellations. In: Proceedings of 2015 International Conference on Wireless Communications Signal Processing (WCSP), Nanjing, China. 2015. pp. 1-5

[16] Yu L, Fan P, Cai D, Ma Z. Design and analysis of SCMA codebook based on star-QAM signaling constellations. IEEE Transactions on Vehicular Technology. 2018;**67**(11):10543-10553. DOI: 10.1109/TVT.2018.2865920

[17] Bao J, Ma Z, Xiao M, Tsiftsis TA, Zhu Z. Bit-interleaved coded scma with iterative multiuser detection: Multidimensional constellations design. IEEE Transactions on Communications. 2018;**66**(11):5292-5304

[18] Lai K, Lei J, Wen L, Chen G, Li W, Xiao P. Secure transmission with randomized constellation rotation for downlink sparse code multiple access system. IEEE Access. 2018;**6**:5049-5063

[19] Xiao K, Xia B, Chen Z, Xiao B, Chen D, Ma S. On capacity-based codebook design and advanced decoding for sparse code multiple access systems. IEEE Transactions on Wireless Communications. 2018;**17**(6):3834-3849

[20] Yan C, Kang G, Zhang N. A dimension distance-based scma codebook design. IEEE Access. 2017;**5**: 5471-5479

[21] Kulhandjian M, D'Amours C. Design of permutation-based sparse code multiple access system. In: Proceedings of 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, QC, Canada. 2017. pp. 1-6

[22] Ameur WB, Mary P, Dumay M, Hélard J, Schwoerer J. Performance study of MPA, Log-MPA and Max-Log-MPA for an uplink SCMA scenario. In: Proceedings of 26th International Conference on Telecommunications (ICT), Hanoi, Vietnam. 2019. pp. 411-416

### **Chapter 7**

## Polynomials Related to Generalized Fibonacci Sequence

*Manjeet Singh Teeth and Sanjay Harne*

### **Abstract**

The Fibonacci polynomials are a polynomial sequence that can be considered a generalization of the Fibonacci numbers. They are defined by the recurrence relation $F_n(x) = x F_{n-1}(x) + F_{n-2}(x)$, $n \ge 2$, where $F_0(x) = 0$ and $F_1(x) = 1$. The first few Fibonacci polynomials are $F_0(x) = 0$, $F_1(x) = 1$, $F_2(x) = x$, $F_3(x) = x^2 + 1$. In this chapter, we extend the Fibonacci recurrence relation to define the sequence {$C_n$} and derive some properties of this sequence. We also define four comparison sequences {$P_n$}, {$Q_n$}, {$R_n$}, and {$S_n$} and obtain some identities with the help of a generating matrix.

**Keywords:** Fibonacci numbers, Fibonacci sequence, generating matrix, rabbit problem, Polynomials

### **1. Introduction**

The Fibonacci sequence [1] takes its name from Leonardo Pisano, known as Fibonacci, the most talented Italian mathematician of the Middle Ages. He is considered the first mathematician to introduce the Hindu-Arabic numeral system to the Italians. His work *Liber Abaci* (1202) is famous for this.

In the *Liber Abaci*, Leonardo states the famous "Rabbit Problem" and works out its solution.

### **1.1 Utilization of Fibonacci sequence in the study of famous rabbit problem**

"How many pairs of rabbits are born of one pair in a year?" This problem is stated in the form: "Suppose a newly-born pair of rabbits, one male and one female, are put in a field. Rabbits are able to mate at the age of 1 month so that at the end of its second month a female can produce another pair of rabbits."

Suppose that our rabbits never die and that the female always produces one new pair (one male and one female) every month from the second month on.

Leonardo also gave the solution to this problem and obtained the sequence of numbers as a result:

$$1, 1, 2, 3, 5, 8, \dots$$

This sequence is called the Fibonacci sequence. The Fibonacci sequence is defined by the recurrence relation as,

$$F\_n = F\_{n-1} + F\_{n-2}, n > 1.$$

Waddill, M.E. [2] has extended the Fibonacci recurrence relation to define the sequence {*Kn*}, where,

$$K\_n = K\_{n-1} + K\_{n-2} + K\_{n-3}, n > 3 \tag{1}$$

where $K_0$, $K_1$, $K_2$ are given arbitrary algebraic integers.

Jaiswal, D.V. [3] has extended the Fibonacci recurrence relation to define the sequence {$Q_n$}, where,

$$Q\_n = Q\_{n-1} + Q\_{n-2} + Q\_{n-3} + Q\_{n-4}, n > 4\tag{2}$$

where $Q_0$, $Q_1$, $Q_2$, $Q_3$ are given arbitrary algebraic integers.

Harne, S. [4] has extended the Fibonacci recurrence relation to define the sequence {$D_n$}, where,

$$D_n = D_{n-1} + D_{n-2} + D_{n-3} + D_{n-4} + D_{n-5}, \; n > 5 \tag{3}$$

where $D_0$, $D_1$, $D_2$, $D_3$, $D_4$ are given arbitrary algebraic integers.

In this chapter, we shall further extend the Fibonacci recurrence relation [5–10] to define the sequence {$C_n$} and shall discuss some properties of this sequence. We shall also consider the four comparison sequences {$P_n$}, {$Q_n$}, {$R_n$}, and {$S_n$}.

### **2. The generalized sequence as per our proposed model {*Cn*}**

We consider the following sequence,

$$\{\mathbf{C}\_n\} = \mathbf{C}\_0, \mathbf{C}\_1, \mathbf{C}\_2, \mathbf{C}\_3, \dots, \mathbf{C}\_n$$

where $C_0, C_1, C_2, C_3, C_4, C_5$ are arbitrary algebraic integers, not all of which are zero, and

$$\mathbf{C}\_{n} = \mathbf{C}\_{n-1} + \mathbf{C}\_{n-2} + \mathbf{C}\_{n-3} + \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6}, n \ge 6\tag{4}$$

We also consider the sequence $\{P_n\} = P_0, P_1, P_2, P_3, \dots, P_n$, where,

$$\begin{aligned} P_0 &= C_3 - C_2 - C_1 - C_0 \\ P_1 &= C_4 - C_3 - C_2 - C_1 \\ P_2 &= C_5 - C_4 - C_3 - C_2 \\ P_3 &= C_6 - C_5 - C_4 - C_3 \\ P_4 &= C_7 - C_6 - C_5 - C_4 \end{aligned} \tag{5}$$

$$\text{with}, P\_n = C\_{n-1} + C\_{n-2} + C\_{n-3} + C\_{n-4} + C\_{n-5}, n \ge 5 \tag{6}$$

and $\{Q_n\} = Q_0, Q_1, Q_2, Q_3, \dots, Q_n$, where,

$$\begin{aligned} Q_0 &= C_4 - C_3 - C_2 - C_1 - C_0 \\ Q_1 &= C_5 - C_4 - C_3 - C_2 - C_1 \\ Q_2 &= C_6 - C_5 - C_4 - C_3 - C_2 \\ Q_3 &= C_7 - C_6 - C_5 - C_4 - C_3 \\ Q_4 &= C_8 - C_7 - C_6 - C_5 - C_4 \end{aligned} \tag{7}$$

$$\text{with}, \qquad Q_n = C_{n-1} + C_{n-2} + C_{n-3} + C_{n-4}, \; n \ge 4 \tag{8}$$

$$R\_0 = \mathbf{C\_5} - \mathbf{C\_4} - \mathbf{C\_3} - \mathbf{C\_2} - \mathbf{C\_1} - \mathbf{C\_0}$$

$$R\_1 = \mathbf{C\_6} - \mathbf{C\_5} - \mathbf{C\_4} - \mathbf{C\_3} - \mathbf{C\_2} - \mathbf{C\_1}$$

$$R\_2 = \mathbf{C\_7} - \mathbf{C\_6} - \mathbf{C\_5} - \mathbf{C\_4} - \mathbf{C\_3} - \mathbf{C\_2} \tag{9}$$

$$R\_3 = \mathbf{C\_8} - \mathbf{C\_7} - \mathbf{C\_6} - \mathbf{C\_5} - \mathbf{C\_4} - \mathbf{C\_3}$$

$$R\_4 = \mathbf{C\_9} - \mathbf{C\_8} - \mathbf{C\_7} - \mathbf{C\_6} - \mathbf{C\_5} - \mathbf{C\_4}$$

$$\text{with, } R_n = C_{n-1} + C_{n-2} + C_{n-3}, \; n \ge 3 \tag{10}$$

$$\begin{aligned} \mathbf{S}\_{0} &= \mathbf{C}\_{6} - \mathbf{C}\_{5} - \mathbf{C}\_{4} - \mathbf{C}\_{3} - \mathbf{C}\_{2} - \mathbf{C}\_{1} - \mathbf{C}\_{0} \\ \mathbf{S}\_{1} &= \mathbf{C}\_{7} - \mathbf{C}\_{6} - \mathbf{C}\_{5} - \mathbf{C}\_{4} - \mathbf{C}\_{3} - \mathbf{C}\_{2} - \mathbf{C}\_{1} \\ \mathbf{S}\_{2} &= \mathbf{C}\_{8} - \mathbf{C}\_{7} - \mathbf{C}\_{6} - \mathbf{C}\_{5} - \mathbf{C}\_{4} - \mathbf{C}\_{3} - \mathbf{C}\_{2} \\ \mathbf{S}\_{3} &= \mathbf{C}\_{9} - \mathbf{C}\_{8} - \mathbf{C}\_{7} - \mathbf{C}\_{6} - \mathbf{C}\_{5} - \mathbf{C}\_{4} - \mathbf{C}\_{3} \\ \mathbf{S}\_{4} &= \mathbf{C}\_{10} - \mathbf{C}\_{9} - \mathbf{C}\_{8} - \mathbf{C}\_{7} - \mathbf{C}\_{6} - \mathbf{C}\_{5} - \mathbf{C}\_{4} \end{aligned} \tag{11}$$

$$\text{with}, \mathbf{S}\_{n} = \mathbf{C}\_{n-1} + \mathbf{C}\_{n-2}, n \ge 2 \tag{12}$$

Expanding each term of $P_n$ by means of (4) and regrouping,

$$\begin{aligned} P_n &= C_{n-2} + C_{n-3} + C_{n-4} + C_{n-5} + C_{n-6} + C_{n-7} \\ &\quad + C_{n-3} + C_{n-4} + C_{n-5} + C_{n-6} + C_{n-7} + C_{n-8} \\ &\quad + C_{n-4} + C_{n-5} + C_{n-6} + C_{n-7} + C_{n-8} + C_{n-9} \\ &\quad + C_{n-5} + C_{n-6} + C_{n-7} + C_{n-8} + C_{n-9} + C_{n-10} \\ &\quad + C_{n-6} + C_{n-7} + C_{n-8} + C_{n-9} + C_{n-10} + C_{n-11} \end{aligned}$$

$$P_n = P_{n-1} + P_{n-2} + P_{n-3} + P_{n-4} + P_{n-5} + P_{n-6}$$

$$P\_{10} = \left(\mathbf{C\_8} + \mathbf{C\_7} + \mathbf{C\_6} + \mathbf{C\_5} + \mathbf{C\_4}\right) + \left(\mathbf{C\_7} + \mathbf{C\_6} + \mathbf{C\_5} + \mathbf{C\_4} + \mathbf{C\_3}\right)$$

$$+ \left(\mathbf{C\_6} + \mathbf{C\_5} + \mathbf{C\_4} + \mathbf{C\_3} + \mathbf{C\_2}\right) + \left(\mathbf{C\_5} + \mathbf{C\_4} + \mathbf{C\_3} + \mathbf{C\_2} + \mathbf{C\_1}\right)$$

$$+ \left(\mathbf{C\_4} + \mathbf{C\_3} + \mathbf{C\_2} + \mathbf{C\_1} + \mathbf{C\_0}\right) + \left(\mathbf{C\_7} - \mathbf{C\_6} - \mathbf{C\_5} - \mathbf{C\_4}\right)$$

$$P\_{10} = P\_9 + P\_8 + P\_7 + P\_6 + P\_5 + P\_4$$

$$\text{Similarly, } P\_9 = P\_8 + P\_7 + P\_6 + P\_5 + P\_4 + P\_3$$

$$P\_8 = P\_7 + P\_6 + P\_5 + P\_4 + P\_3 + P\_2$$

$$P_7 = P_6 + P_5 + P_4 + P_3 + P_2 + P_1$$

$$P\_n = P\_{n-1} + P\_{n-2} + P\_{n-3} + P\_{n-4} + P\_{n-5} + P\_{n-6} \tag{13}$$

Proceeding on similar lines, it can be shown that for *n* ≥6.

$$\begin{aligned} \mathbf{Q}\_{n} &= \mathbf{C}\_{n-2} + \mathbf{C}\_{n-3} + \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} \\ &+ \mathbf{C}\_{n-3} + \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} + \mathbf{C}\_{n-8} \\ &+ \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} + \mathbf{C}\_{n-8} + \mathbf{C}\_{n-9} \\ &+ \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} + \mathbf{C}\_{n-8} + \mathbf{C}\_{n-9} + \mathbf{C}\_{n-10} \\ \mathbf{Q}\_{n} &= \mathbf{Q}\_{n-1} + \mathbf{Q}\_{n-2} + \mathbf{Q}\_{n-3} + \mathbf{Q}\_{n-4} + \mathbf{Q}\_{n-5} + \mathbf{Q}\_{n-6}, n \geq 6 \end{aligned} \tag{14}$$

Proceeding on similar lines it can be shown that for *n*≥ 6

$$\begin{aligned} R\_n &= C\_{n-2} + C\_{n-3} + C\_{n-4} + C\_{n-5} + C\_{n-6} + C\_{n-7} \\ &+ C\_{n-3} + C\_{n-4} + C\_{n-5} + C\_{n-6} + C\_{n-7} + C\_{n-8} \\ &+ C\_{n-4} + C\_{n-5} + C\_{n-6} + C\_{n-7} + C\_{n-8} + C\_{n-9} \\ R\_n &= R\_{n-1} + R\_{n-2} + R\_{n-3} + R\_{n-4} + R\_{n-5} + R\_{n-6}, n \geq 6 \end{aligned} \tag{15}$$

Proceeding on similar lines it can be shown that for *n*≥ 6

$$\begin{aligned} \mathbf{S}\_{n} &= \mathbf{C}\_{n-2} + \mathbf{C}\_{n-3} + \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} \\ &+ \mathbf{C}\_{n-3} + \mathbf{C}\_{n-4} + \mathbf{C}\_{n-5} + \mathbf{C}\_{n-6} + \mathbf{C}\_{n-7} + \mathbf{C}\_{n-8}, n \ge 6 \\ \mathbf{S}\_{n} &= \mathbf{S}\_{n-1} + \mathbf{S}\_{n-2} + \mathbf{S}\_{n-3} + \mathbf{S}\_{n-4} + \mathbf{S}\_{n-5} + \mathbf{S}\_{n-6}, n \ge 6 \end{aligned} \tag{16}$$

Thus, the four sequences {*Pn*}, {*Qn*}, {*Rn*}, and {*Sn*} are special cases of sequence {*Cn*} and all obtained by taking different initial values [11, 12].
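These recurrences are easy to check numerically. The sketch below is our own illustration (the helper `six_step` and the starting values are assumptions, not from the chapter): it builds {$C_n$} from arbitrary initial values, forms $P_n$, $Q_n$, $R_n$, $S_n$ from the closed forms (6), (8), (10), and (12), and lets one verify that all four satisfy the six-term recurrence.

```python
def six_step(init, n_terms):
    """Extend six initial values with C_n = C_{n-1} + ... + C_{n-6}."""
    seq = list(init)
    while len(seq) < n_terms:
        seq.append(sum(seq[-6:]))
    return seq

N_TERMS = 40
C = six_step([1, 2, 3, 4, 5, 6], N_TERMS)  # arbitrary initial values

# closed forms (6), (8), (10), (12): sums of 5, 4, 3, and 2 preceding C's
P = {n: sum(C[n - 5:n]) for n in range(5, N_TERMS)}
Q = {n: sum(C[n - 4:n]) for n in range(4, N_TERMS)}
R = {n: sum(C[n - 3:n]) for n in range(3, N_TERMS)}
S = {n: sum(C[n - 2:n]) for n in range(2, N_TERMS)}
```

Checking `P[n] == P[n-1] + ... + P[n-6]` (and likewise for Q, R, S) for every $n$ large enough that all expansions are valid confirms the derivations above.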

On taking,

$$\begin{aligned} &C_0 = C_1 = C_2 = 0,\; C_3 = C_4 = 1,\; C_5 = 2 \\ &C_0 = C_1 = 0,\; C_2 = 1,\; C_3 = 0,\; C_4 = 1,\; C_5 = 2 \\ &C_0 = 0,\; C_1 = 1,\; C_2 = C_3 = 0,\; C_4 = 1,\; C_5 = 2 \\ &C_0 = 1,\; C_1 = C_2 = C_3 = 0,\; C_4 = 1,\; C_5 = 2 \\ &C_0 = C_1 = C_2 = C_3 = 0,\; C_4 = 1,\; C_5 = 2 \end{aligned} \tag{17}$$

we obtain, respectively, the sequences

$$\begin{aligned} &0, 0, 0, 1, 1, 2, 4, 8, 16, 32, 63, \dots, J_n, \dots \\ &0, 0, 1, 0, 1, 2, 4, 8, 16, 31, 62, \dots, K_n, \dots \\ &0, 1, 0, 0, 1, 2, 4, 8, 15, 30, 60, \dots, L_n, \dots \\ &1, 0, 0, 0, 1, 2, 4, 7, 14, 28, 56, \dots, M_n, \dots \\ &0, 0, 0, 0, 1, 2, 3, 6, 12, 24, 48, \dots, N_n, \dots \end{aligned}$$

Here, we find that

$$K_n = J_{n-1} + J_{n-2} + J_{n-3} + J_{n-4} + J_{n-5}$$

$$L_n = J_{n-1} + J_{n-2} + J_{n-3} + J_{n-4}$$

$$M_n = J_{n-1} + J_{n-2} + J_{n-3}$$

$$N_n = J_{n-1} + J_{n-2}$$


Hence, we say that {*Jn*} is *Cn* type sequence, while {*Kn*} is *Pn* type sequence, and {*Ln*} is *Qn* type sequence, while {*Mn*} is *Rn* type sequence, and {*Nn*} is *Sn* type sequence.
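The five special sequences and the relations above can be reproduced in a few lines (our own illustrative sketch; the helper `six_step` is repeated here so the snippet stands alone):

```python
def six_step(init, n_terms):
    # C_n = C_{n-1} + ... + C_{n-6}
    seq = list(init)
    while len(seq) < n_terms:
        seq.append(sum(seq[-6:]))
    return seq

# the five initial-value sets of (17)
J = six_step([0, 0, 0, 1, 1, 2], 15)
K = six_step([0, 0, 1, 0, 1, 2], 15)
L = six_step([0, 1, 0, 0, 1, 2], 15)
M = six_step([1, 0, 0, 0, 1, 2], 15)
N = six_step([0, 0, 0, 0, 1, 2], 15)
```

Comparing the generated terms against the sums $K_n = J_{n-1} + \cdots + J_{n-5}$, $L_n = J_{n-1} + \cdots + J_{n-4}$, $M_n = J_{n-1} + J_{n-2} + J_{n-3}$, and $N_n = J_{n-1} + J_{n-2}$ confirms the stated relations.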

#### **2.1 Linear sums and some properties**

We now derive simple properties [2, 13, 14] of the sequences {$C_n$}, {$P_n$}, {$Q_n$}, {$R_n$}, and {$S_n$}. Expressing each of the terms $C_6, C_7, C_8, \dots, C_{n+5}$ as the sum of its six preceding terms, as given in (4), and adding both sides, we obtain on simplification:

$$\sum\_{i=0}^{n} \mathbf{C}\_{i} = \frac{1}{5} \left\{ \mathbf{C}\_{n+5} - \mathbf{C}\_{n+3} - 2\mathbf{C}\_{n+2} - 3\mathbf{C}\_{n+1} + \mathbf{C}\_{n} - (\mathbf{C}\_{5} - \mathbf{C}\_{3} - 2\mathbf{C}\_{2} - 3\mathbf{C}\_{1} - 4\mathbf{C}\_{0}) \right\} \tag{18}$$
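Identity (18) can be checked numerically for arbitrary initial values; the sketch below is our own illustration (the helper and the starting values are assumptions, not from the chapter):

```python
def six_step(init, n_terms):
    # C_n = C_{n-1} + ... + C_{n-6}
    seq = list(init)
    while len(seq) < n_terms:
        seq.append(sum(seq[-6:]))
    return seq

def partial_sum_rhs(C, n):
    """Right-hand side of identity (18); exact in integer arithmetic,
    since the numerator is always an exact multiple of 5."""
    const = C[5] - C[3] - 2 * C[2] - 3 * C[1] - 4 * C[0]
    return (C[n + 5] - C[n + 3] - 2 * C[n + 2] - 3 * C[n + 1] + C[n] - const) // 5

C = six_step([3, 1, 4, 1, 5, 9], 40)  # arbitrary initial values
```

Comparing `sum(C[:n + 1])` with `partial_sum_rhs(C, n)` over a range of $n$ confirms (18).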

On using (4), (5), (7), (9), and (12), we get

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i} = \sum\_{i=0}^{6n-1} \mathbf{C}\_{i} + \mathbf{C}\_{0} \tag{19}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+2} = \sum\_{i=0}^{6n+1} \mathbf{C}\_i + P\_0 \tag{20}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+3} = \sum\_{i=0}^{6n+2} \mathbf{C}\_{i} + \mathbf{Q}\_{0} \tag{21}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+4} = \sum\_{i=0}^{6n+3} \mathbf{C}\_{i} + \mathbf{R}\_{0} \tag{22}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+5} = \sum\_{i=0}^{6n+4} \mathbf{C}\_{i} + \mathbf{S}\_{0} \tag{23}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+6} = \sum\_{i=0}^{6n+5} \mathbf{C}\_i + (\mathbf{S}\_1 - \mathbf{C}\_0) \tag{24}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+5} = \sum\_{i=0}^{6n+4} \mathbf{C}\_i + (\mathbf{R}\_1 - \mathbf{C}\_0) \tag{25}$$

$$\sum\_{i=0}^{n} \mathbf{C}\_{6i+4} = \sum\_{i=0}^{6n+3} \mathbf{C}\_i + (\mathbf{Q}\_1 - \mathbf{C}\_0) \tag{26}$$

$$\sum_{i=0}^{n} \mathbf{C}_{6i+3} = \sum_{i=0}^{6n+2} \mathbf{C}_{i} + (\mathbf{P}_{1} - \mathbf{C}_{0}) \tag{27}$$

#### **2.2 Property of the sequence {Jn}**

**Theorem**: For the sequence {*Jn*} we have,

$$\begin{vmatrix} J_n & J_{n+1} & J_{n+2} & J_{n+3} & J_{n+4} & J_{n+5} \\ J_{n+1} & J_{n+2} & J_{n+3} & J_{n+4} & J_{n+5} & J_{n+6} \\ J_{n+2} & J_{n+3} & J_{n+4} & J_{n+5} & J_{n+6} & J_{n+7} \\ J_{n+3} & J_{n+4} & J_{n+5} & J_{n+6} & J_{n+7} & J_{n+8} \\ J_{n+4} & J_{n+5} & J_{n+6} & J_{n+7} & J_{n+8} & J_{n+9} \\ J_{n+5} & J_{n+6} & J_{n+7} & J_{n+8} & J_{n+9} & J_{n+10} \end{vmatrix} = (-1)^{n+1} \tag{28}$$


Proof: Consider the determinant –

$$
\Delta = \begin{vmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{vmatrix}
$$

The value of this determinant is $-1$. We have

$$
\Delta^2 = \begin{vmatrix} 2 & 2 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{vmatrix}
$$

Now, by mathematical induction,

$$
\Delta^n = \begin{vmatrix} J_{n+1} & K_{n+1} & L_{n+1} & M_{n+1} & N_{n+1} & J_n \\ J_n & K_n & L_n & M_n & N_n & J_{n-1} \\ J_{n-1} & K_{n-1} & L_{n-1} & M_{n-1} & N_{n-1} & J_{n-2} \\ J_{n-2} & K_{n-2} & L_{n-2} & M_{n-2} & N_{n-2} & J_{n-3} \\ J_{n-3} & K_{n-3} & L_{n-3} & M_{n-3} & N_{n-3} & J_{n-4} \\ J_{n-4} & K_{n-4} & L_{n-4} & M_{n-4} & N_{n-4} & J_{n-5} \end{vmatrix}
$$

Now, writing $N_{n+1} = J_n + J_{n-1}$ (and similarly in the other rows), the R.H.S. can be written as the sum of two determinants, one of which is zero. Therefore,

$$
\Delta^n = \begin{vmatrix} J_{n+1} & K_{n+1} & L_{n+1} & M_{n+1} & J_{n-1} & J_n \\ J_n & K_n & L_n & M_n & J_{n-2} & J_{n-1} \\ J_{n-1} & K_{n-1} & L_{n-1} & M_{n-1} & J_{n-3} & J_{n-2} \\ J_{n-2} & K_{n-2} & L_{n-2} & M_{n-2} & J_{n-4} & J_{n-3} \\ J_{n-3} & K_{n-3} & L_{n-3} & M_{n-3} & J_{n-5} & J_{n-4} \\ J_{n-4} & K_{n-4} & L_{n-4} & M_{n-4} & J_{n-6} & J_{n-5} \end{vmatrix}
$$

Repeating the argument with $M_{n+1} = J_n + J_{n-1} + J_{n-2}$,

$$
\Delta^n = \begin{vmatrix} J_{n+1} & K_{n+1} & L_{n+1} & J_{n-2} & J_{n-1} & J_n \\ J_n & K_n & L_n & J_{n-3} & J_{n-2} & J_{n-1} \\ J_{n-1} & K_{n-1} & L_{n-1} & J_{n-4} & J_{n-3} & J_{n-2} \\ J_{n-2} & K_{n-2} & L_{n-2} & J_{n-5} & J_{n-4} & J_{n-3} \\ J_{n-3} & K_{n-3} & L_{n-3} & J_{n-6} & J_{n-5} & J_{n-4} \\ J_{n-4} & K_{n-4} & L_{n-4} & J_{n-7} & J_{n-6} & J_{n-5} \end{vmatrix}
$$

then with $L_{n+1} = J_n + J_{n-1} + J_{n-2} + J_{n-3}$,

$$\Delta^n = \begin{vmatrix} J_{n+1} & K_{n+1} & J_{n-3} & J_{n-2} & J_{n-1} & J_n \\ J_n & K_n & J_{n-4} & J_{n-3} & J_{n-2} & J_{n-1} \\ J_{n-1} & K_{n-1} & J_{n-5} & J_{n-4} & J_{n-3} & J_{n-2} \\ J_{n-2} & K_{n-2} & J_{n-6} & J_{n-5} & J_{n-4} & J_{n-3} \\ J_{n-3} & K_{n-3} & J_{n-7} & J_{n-6} & J_{n-5} & J_{n-4} \\ J_{n-4} & K_{n-4} & J_{n-8} & J_{n-7} & J_{n-6} & J_{n-5} \end{vmatrix}$$

and with $K_{n+1} = J_n + J_{n-1} + J_{n-2} + J_{n-3} + J_{n-4}$,

$$
\Delta^n = \begin{vmatrix} J_{n+1} & J_{n-4} & J_{n-3} & J_{n-2} & J_{n-1} & J_n \\ J_n & J_{n-5} & J_{n-4} & J_{n-3} & J_{n-2} & J_{n-1} \\ J_{n-1} & J_{n-6} & J_{n-5} & J_{n-4} & J_{n-3} & J_{n-2} \\ J_{n-2} & J_{n-7} & J_{n-6} & J_{n-5} & J_{n-4} & J_{n-3} \\ J_{n-3} & J_{n-8} & J_{n-7} & J_{n-6} & J_{n-5} & J_{n-4} \\ J_{n-4} & J_{n-9} & J_{n-8} & J_{n-7} & J_{n-6} & J_{n-5} \end{vmatrix}
$$

Reordering the columns (an even permutation, so the sign is unchanged) gives

$$
\Delta^n = \begin{vmatrix} J_{n+1} & J_n & J_{n-1} & J_{n-2} & J_{n-3} & J_{n-4} \\ J_n & J_{n-1} & J_{n-2} & J_{n-3} & J_{n-4} & J_{n-5} \\ J_{n-1} & J_{n-2} & J_{n-3} & J_{n-4} & J_{n-5} & J_{n-6} \\ J_{n-2} & J_{n-3} & J_{n-4} & J_{n-5} & J_{n-6} & J_{n-7} \\ J_{n-3} & J_{n-4} & J_{n-5} & J_{n-6} & J_{n-7} & J_{n-8} \\ J_{n-4} & J_{n-5} & J_{n-6} & J_{n-7} & J_{n-8} & J_{n-9} \end{vmatrix}
$$

Putting n - 9 = m, that is, n = m + 9, and substituting all the Δ's, we obtain

$$
(-1)^{m+9} = \begin{vmatrix}
f\_{m+10} & f\_{m+9} & f\_{m+8} & f\_{m+7} & f\_{m+6} & f\_{m+5} \\
f\_{m+9} & f\_{m+8} & f\_{m+7} & f\_{m+6} & f\_{m+5} & f\_{m+4} \\
f\_{m+8} & f\_{m+7} & f\_{m+6} & f\_{m+5} & f\_{m+4} & f\_{m+3} \\
f\_{m+7} & f\_{m+6} & f\_{m+5} & f\_{m+4} & f\_{m+3} & f\_{m+2} \\
f\_{m+6} & f\_{m+5} & f\_{m+4} & f\_{m+3} & f\_{m+2} & f\_{m+1} \\
f\_{m+5} & f\_{m+4} & f\_{m+3} & f\_{m+2} & f\_{m+1} & f\_m
\end{vmatrix}
$$

Rearranging the determinant and replacing m with n, we get the required result (28).
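The sign pattern in this identity can be checked numerically. The sketch below assumes, purely for illustration, that {f\_n} is the six-step Fibonacci ("hexanacci") sequence with initial values f\_0 = ... = f\_4 = 0 and f\_5 = 1; the chapter's own initial values may differ, in which case the base determinant changes accordingly.

```python
# Numerical check of the determinant identity, assuming (for illustration)
# that f_n is the six-step Fibonacci ("hexanacci") sequence with initial
# values f_0 = ... = f_4 = 0, f_5 = 1; the chapter's values may differ.
from itertools import permutations

f = [0, 0, 0, 0, 0, 1]
for n in range(6, 40):
    f.append(sum(f[n - 6:n]))      # f_n = f_{n-1} + ... + f_{n-6}

def det6(M):
    # Leibniz expansion; fine for a 6x6 integer matrix (720 terms).
    total = 0
    for perm in permutations(range(6)):
        sign = 1
        for i in range(6):
            for j in range(i + 1, 6):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += sign * prod
    return total

for m in range(8):
    # Hankel matrix with rows (f_{m+10}, ..., f_{m+5}), ..., (f_{m+5}, ..., f_m)
    M = [[f[m + 10 - r - c] for c in range(6)] for r in range(6)]
    assert det6(M) == (-1) ** (m + 9)
print("determinant equals (-1)^(m+9) for m = 0..7")
```

Each unit shift in m multiplies this Hankel matrix by the companion matrix of the recurrence, whose determinant is −1, which is where the alternating sign comes from.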

#### **2.3 Generating matrix for {Cn}**

In this section, we obtain some identities with the help of a generating matrix. Consider the matrix

$$[T] = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix} \tag{29}$$

By mathematical induction, we can show that:

$$[T]^n = \begin{bmatrix}
J\_{n+1} & K\_{n+1} & L\_{n+1} & M\_{n+1} & N\_{n-1} & J\_n \\
J\_n & K\_n & L\_n & M\_n & N\_{n-2} & J\_{n-1} \\
J\_{n-1} & K\_{n-1} & L\_{n-1} & M\_{n-1} & N\_{n-3} & J\_{n-2} \\
J\_{n-2} & K\_{n-2} & L\_{n-2} & M\_{n-2} & N\_{n-4} & J\_{n-3} \\
J\_{n-3} & K\_{n-3} & L\_{n-3} & M\_{n-3} & N\_{n-5} & J\_{n-4} \\
J\_{n-4} & K\_{n-4} & L\_{n-4} & M\_{n-4} & N\_{n-6} & J\_{n-5}
\end{bmatrix} \tag{30}$$

where *n* ≥ 5 and

*Polynomials Related to Generalized Fibonacci Sequence DOI: http://dx.doi.org/10.5772/intechopen.110481*

$$[C\_n, C\_{n-1}, C\_{n-2}, C\_{n-3}, C\_{n-4}, C\_{n-5}]^T = [T]^{n-5} [C\_5, C\_4, C\_3, C\_2, C\_1, C\_0]^T, \quad n \ge 5 \tag{31}$$
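A quick numerical sanity check of this relation, using arbitrary placeholder initial values C\_0, ..., C\_5 (the chapter's actual initial values are defined earlier) and the six-term recurrence encoded by the companion matrix [T] of (29); the exponent n − 5 is chosen so that n = 5 reduces to the identity matrix:

```python
# Sanity check of (31): the companion matrix [T] of (29) maps the initial
# state (C_5, C_4, C_3, C_2, C_1, C_0) to (C_n, ..., C_{n-5}) after n - 5 steps.
# The initial values below are arbitrary placeholders, not the chapter's.

T = [[1, 1, 1, 1, 1, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1, 0]]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

C = [1, 1, 2, 4, 8, 16]            # placeholder C_0 .. C_5
for n in range(6, 20):             # recurrence encoded by [T]:
    C.append(sum(C[n - 6:n]))      # C_n = C_{n-1} + C_{n-2} + ... + C_{n-6}

n = 12
v = [C[5], C[4], C[3], C[2], C[1], C[0]]
for _ in range(n - 5):             # apply [T]^{n-5}
    v = mat_vec(T, v)
print(v == [C[n - k] for k in range(6)])   # prints True
```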

On using (30) and (31), we get:

$$\begin{bmatrix} C\_{n+P} \\ C\_{n+P-1} \\ C\_{n+P-2} \\ C\_{n+P-3} \\ C\_{n+P-4} \\ C\_{n+P-5} \end{bmatrix} = \begin{bmatrix}
J\_{P+1} & K\_{P+1} & L\_{P+1} & M\_{P+1} & N\_{P-1} & J\_P \\
J\_P & K\_P & L\_P & M\_P & N\_{P-2} & J\_{P-1} \\
J\_{P-1} & K\_{P-1} & L\_{P-1} & M\_{P-1} & N\_{P-3} & J\_{P-2} \\
J\_{P-2} & K\_{P-2} & L\_{P-2} & M\_{P-2} & N\_{P-4} & J\_{P-3} \\
J\_{P-3} & K\_{P-3} & L\_{P-3} & M\_{P-3} & N\_{P-5} & J\_{P-4} \\
J\_{P-4} & K\_{P-4} & L\_{P-4} & M\_{P-4} & N\_{P-6} & J\_{P-5}
\end{bmatrix} \begin{bmatrix} C\_n \\ C\_{n-1} \\ C\_{n-2} \\ C\_{n-3} \\ C\_{n-4} \\ C\_{n-5} \end{bmatrix}$$

From this we obtain:

$$C\_{n+P} = J\_{P+1}C\_n + K\_{P+1}C\_{n-1} + L\_{P+1}C\_{n-2} + M\_{P+1}C\_{n-3} + N\_{P-1}C\_{n-4} + J\_P C\_{n-5} \tag{32}$$

Let us now consider the matrix [*W*], the transpose of the matrix [*T*]:

$$[W] = [T]^T = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
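A one-line check that this matrix is indeed the transpose of [T] from (29):

```python
# Check that [W] is the transpose of the generating matrix [T] from (29).
T = [[1, 1, 1, 1, 1, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1, 0]]
W = [[1, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 0, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [1, 0, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0]]
print(W == [list(col) for col in zip(*T)])   # prints True
```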

It can be shown that the sequence

$$\{C\_4, P\_5, Q\_5, R\_5, S\_5, C\_5, \dots, C\_{n-1}, P\_n, Q\_n, R\_n, S\_n, C\_n\} \tag{33}$$

is generated by the matrix [*W*]:

$$[\mathbf{C}\_n, P\_n, Q\_n, R\_n, \mathbf{S}\_n, \mathbf{C}\_{n-1}] = [W]^{n-5} [\mathbf{C}\_5, P\_5, Q\_5, R\_5, \mathbf{S}\_5, \mathbf{C}\_4], n \ge 5 \tag{34}$$

On using (33) and (34), we get

$$\begin{bmatrix} C\_{n+P} \\ P\_{n+P} \\ Q\_{n+P} \\ R\_{n+P} \\ S\_{n+P} \\ C\_{n+P-1} \end{bmatrix} = [W]^P \begin{bmatrix} C\_n \\ P\_n \\ Q\_n \\ R\_n \\ S\_n \\ C\_{n-1} \end{bmatrix} = \begin{bmatrix}
f\_{P+1} & f\_P & f\_{P-1} & f\_{P-2} & f\_{P-3} & f\_{P-4} \\
K\_{P+1} & K\_P & K\_{P-1} & K\_{P-2} & K\_{P-3} & K\_{P-4} \\
L\_{P+1} & L\_P & L\_{P-1} & L\_{P-2} & L\_{P-3} & L\_{P-4} \\
M\_{P+1} & M\_P & M\_{P-1} & M\_{P-2} & M\_{P-3} & M\_{P-4} \\
N\_{P+1} & N\_P & N\_{P-1} & N\_{P-2} & N\_{P-3} & N\_{P-4} \\
f\_P & f\_{P-1} & f\_{P-2} & f\_{P-3} & f\_{P-4} & f\_{P-5}
\end{bmatrix} \begin{bmatrix} C\_n \\ P\_n \\ Q\_n \\ R\_n \\ S\_n \\ C\_{n-1} \end{bmatrix}$$

Comparing components, we obtain:

$$C\_{n+P} = f\_{P+1}C\_n + f\_P P\_n + f\_{P-1}Q\_n + f\_{P-2}R\_n + f\_{P-3}S\_n + f\_{P-4}C\_{n-1}$$

$$P\_{n+P} = K\_{P+1}C\_n + K\_P P\_n + K\_{P-1}Q\_n + K\_{P-2}R\_n + K\_{P-3}S\_n + K\_{P-4}C\_{n-1}$$

$$Q\_{n+P} = L\_{P+1}C\_n + L\_P P\_n + L\_{P-1}Q\_n + L\_{P-2}R\_n + L\_{P-3}S\_n + L\_{P-4}C\_{n-1}$$

$$R\_{n+P} = M\_{P+1}C\_n + M\_P P\_n + M\_{P-1}Q\_n + M\_{P-2}R\_n + M\_{P-3}S\_n + M\_{P-4}C\_{n-1}$$

$$S\_{n+P} = N\_{P+1}C\_n + N\_P P\_n + N\_{P-1}Q\_n + N\_{P-2}R\_n + N\_{P-3}S\_n + N\_{P-4}C\_{n-1}$$
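These component identities amount to reading the coefficient sequences off the rows of [W]^P. The sketch below checks this numerically, with placeholder initial values and with f\_i, K\_i, L\_i, M\_i, N\_i identified with the rows of powers of [W] (an illustrative assumption, since the chapter defines those sequences independently):

```python
# Sketch: iterate [W] to generate the interleaved state
# (C_n, P_n, Q_n, R_n, S_n, C_{n-1}), then check that the rows of [W]^P
# supply the coefficients of the five component identities.
# Initial values are hypothetical placeholders, not the chapter's.

W = [[1, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 0, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [1, 0, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0]]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(6)) for j in range(6)]
            for i in range(6)]

states = {5: [3, 1, 4, 1, 5, 9]}   # hypothetical (C_5, P_5, Q_5, R_5, S_5, C_4)
for n in range(6, 25):
    states[n] = mat_vec(W, states[n - 1])

n, P = 8, 7
WP = [[int(i == j) for j in range(6)] for i in range(6)]   # identity matrix
for _ in range(P):
    WP = mat_mul(WP, W)

# Row i of [W]^P gives one identity; e.g. row 0 reads
# C_{n+P} = f_{P+1} C_n + f_P P_n + f_{P-1} Q_n + f_{P-2} R_n + f_{P-3} S_n + f_{P-4} C_{n-1}
for i in range(6):
    assert states[n + P][i] == sum(WP[i][j] * states[n][j] for j in range(6))
print("all component identities hold")   # prints "all component identities hold"
```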

Application:

We can introduce generalized Fibonacci *n*-step polynomials. Based on these polynomials, we can define a new class of square matrices of order *n* and state a new coding theory, called generalized Fibonacci *n*-step theory.

### **3. Discussion**

Mathematics has enormous potential for solving the various problems of daily life. The Fibonacci polynomials form a polynomial sequence that can be viewed as a generalization of sequences studied earlier by mathematicians such as Atanassov [11], Harne and Parihar [4], and Georgiev and Atanassov [8], in accordance with our findings. The chapter thus supports the fruitful study of the various cases illustrated by the cited works, and is in turn well supported by these earlier studies.

### **4. Conclusions**

There are many known identities for the Fibonacci recursion relation. We define the sequence {*Cn*} and its four comparison sequences {*Pn*}, {*Qn*}, {*Rn*}, and {*Sn*}. We derive linear sum properties of the comparison sequences, and we also derive a generating matrix for the sequence {*Cn*}.


### **Author details**

Manjeet Singh Teeth<sup>1</sup> \* and Sanjay Harne<sup>2</sup>

1 Department of Mathematics, Christian Eminent College Affiliated to Devi Ahilya University, Indore, India

2 Department of Mathematics, Mata Jijabai Government P.G. Girls College, Affiliated to Devi Ahilya University, Indore, India

\*Address all correspondence to: manjeetsinghteeth@gmail.com

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Vorobyov NN. The Fibonacci Numbers. Boston Pergamon: D.C. Health Company; 1963

[2] Waddill ME, Lovis S. Another generalized Fibonacci sequence. The Fibonacci Quarterly. 1967;**5**(3):209-222

[3] Jaiswal DV. On a generalized Fibonacci sequence. Labdev Journal of Science and Technology. 1969;**7**(2):67-71

[4] Harne S, Parihar CL. Generalized Fibonacci sequence. Ganit Sandesh (India). 1994;**8**(2):75-80

[5] Teeth MS, Harne S. Polynomial related to generalized Fibonacci sequence. Journal of Ultra Scientist of Physical Sciences. 2022;**34**(2):16-24

[6] Georghiou C. On some second order linear recurrence. The Fibonacci Quarterly. 1989;**27**(2):10-15

[7] Georgiev P, Atanassov KT. On one generalization of the Fibonacci sequence, Part III. Some relations with fixed initial values. Bulletin of Number Theory and Related Topics. 1992;**16**:83-92

[8] Georgiev P, Atanassov KT. On one generalization of the Fibonacci sequence, Part II. Some relations with arbitrary initial values. Bulletin of Number Theory and Related Topics. 1995;**16**:75-82

[9] Georgiev P, Atanassov KT. On one generalization of the Fibonacci sequence. Part V. Some examples. Notes on Number Theory and Discrete Mathematics. 1996a;**2**(4):8-13

[10] Georgiev P, Atanassov KT. On one generalization of the Fibonacci sequence, Part VI: Some other examples. Notes on Number Theory and Discrete Mathematics. 1996b;**2**(4):14-17

[11] Atanassov KT. An arithmetic function and some of its applications. Bulletin of Number Theory and Related Topics. 1985;**9**(1):18-27

[12] Atanassov KT, Atanassov L, Sasselov D. A new perspective to the generalization of the Fibonacci sequence. The Fibonacci Quarterly. 1985;**23**(1): 21-28

[13] Walton JE, Horadam AF. Some aspects of generalized Fibonacci numbers. The Fibonacci Quarterly. 1974a;**12**(3):241-250

[14] Walton JE, Horadam AF. Some further identities for the generalized Fibonacci sequence {Hn}. The Fibonacci Quarterly. 1974b;**12**(3):272-280

