*Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2*
*DOI: http://dx.doi.org/10.5772/intechopen.95806*
*Operations Management - Emerging Trend in the Digital Era*

Intelligence (AI), CAMELS. Only articles that were available in full text and published in scholarly, peer-reviewed journals were chosen to be closely examined. The search was also conducted using the backward and forward approach, where the reference lists of articles were utilised to find further research papers.

**4. Findings and discussion**

This section presents an overview of the challenges banks face in deploying AI in their daily front, middle and back office operations, examined from the CAMELS perspective (see **Table 1** in the Appendix).

**Figure 1.**
*Soundness of CAMELS.*

*Taxonomy of challenges posed by AI on Bank Soundness - A classification based on the determinants Bank*

#### **4.1 Capital and liquidity**

Bank capital acts as a core determinant of a bank's survival. Capital absorbs losses during adversity, and insufficient capital holdings can cause banks to collapse. AI, with its wide-ranging capabilities, helps banks to maintain robust capital holdings through stress testing.
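The stress-testing use just described can be sketched in a few lines. All figures below (capital, risk-weighted assets, scenario losses, the 8% floor) are illustrative assumptions for the sketch, not values from the chapter.

```python
# Hypothetical stress-test sketch: project the capital ratio under assumed
# loss scenarios and flag whether it stays above a Basel-style minimum.

capital = 12.0    # bank capital (illustrative, in billions)
rwa = 100.0       # risk-weighted assets (illustrative)
min_ratio = 0.08  # assumed minimum prescribed capital ratio

# Assumed losses under each scenario, in the same units as capital.
scenarios = {"baseline": 1.0, "adverse": 4.0, "severely_adverse": 6.0}

for name, loss in scenarios.items():
    stressed = (capital - loss) / rwa
    print(f"{name}: ratio={stressed:.2%}, passes={stressed >= min_ratio}")
# baseline and adverse pass; severely_adverse falls to 6.00%, below the floor,
# signalling that capital should be raised before such a shock materialises.
```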

Banks have to hold sufficient liquidity funding to ensure that they are able to meet unforeseen deposit outflows. Banks that struggle to meet their daily liquidity needs will eventually fail [3]. Central banks, working on a larger scale to oversee the workings of the market, use AI to sort large numbers of banknotes and detect liquidity problems.

AI's ability to detect or uncover a crisis depends on the quantity and quality of the data provided and used to train the algorithms. If the dataset lacks important conditions such as economic crashes, and normal periods far outnumber crisis periods, the limited crisis data could reduce AI's predictive abilities, and the output will have limited use in measuring or projecting future risk under stress [7, 32]. It will therefore have little value for banks setting their minimum prescribed capital/liquidity holdings (Basel accords) to remain solvent while lending through recessions [27]. Banks have little choice but to rely on the theory of the distribution of losses and parametric statistical structures to link normal-times data to the large losses that cause instability. Yet a more accurate prediction would come from data on the distribution of losses itself [27].
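The data-imbalance problem described above can be made concrete with a toy example. The counts and the naive "always predict normal" model are hypothetical; the point is that scarce crisis observations let a model look accurate while being useless under stress.

```python
# Illustrative sketch: why scarce crisis data undermines crisis prediction.

def recall(y_true, y_pred, positive):
    # Fraction of actual positive cases the model correctly flagged.
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual = sum(1 for t in y_true if t == positive)
    return hits / actual if actual else 0.0

# 990 normal quarters vs. 10 crisis quarters: crises are rare in history.
y_true = ["normal"] * 990 + ["crisis"] * 10

# A model trained on such data can score well by ignoring crises entirely.
y_pred = ["normal"] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                          # 0.99 -- looks excellent
print(recall(y_true, y_pred, "crisis"))  # 0.0  -- worthless for projecting stress
```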

#### **4.2 Asset**

Asset quality is measured by the level of credit risk contained in a bank's assets [62]. Therefore, a bank that can detect, measure, monitor and regulate credit risk will hold higher-quality assets [63]. The GFC showed that credit risk is the most challenging risk to manage and control, as it not only absorbs profits but exposes banks to failure as well. AI helps banks to clearly assess and evaluate customers' risk, eliminating ambiguity and bias while improving loan processes.

Banks are accountable for each decision they make, and as such employ verification and checks at several levels to weed out incorrect or weak decisions. Loan officers should be able to give their superiors, compliance officers, auditors, regulators and customers a logical explanation of the grounds on which a loan has been accepted or rejected [5, 7, 10, 12, 64, 65]. The working logic of an AI decision has to be traceable backwards. Customers need to understand why their loan application has been rejected, or why AI has recommended a particular product, before acting on it. Keeping customers in the dark without proper justification cuts short their chances of determining the real cause behind the rejection, finding solutions to their problems and improving their circumstances, or proving identity theft if it happened to them. In short, an adverse AI decision can have a permanent detrimental effect on someone's future [28, 64, 66–68].
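The backward traceability asked for here can be sketched as a decision function that logs every rule it fires. The thresholds, field names and rules below are illustrative assumptions, not the chapter's (or any bank's) actual criteria.

```python
# Hypothetical sketch of a traceable loan decision: every rule that fires is
# recorded, so officers, auditors and customers can see why an application
# was rejected.

def decide_loan(applicant):
    reasons = []
    if applicant["debt_to_income"] > 0.45:
        reasons.append("debt-to-income ratio above 0.45")
    if applicant["missed_payments_12m"] > 2:
        reasons.append("more than 2 missed payments in the last 12 months")
    decision = "rejected" if reasons else "approved"
    # The reason trail is the backward-traceable "working logic" of the decision.
    return {"decision": decision, "reasons": reasons}

result = decide_loan({"debt_to_income": 0.52, "missed_payments_12m": 1})
print(result["decision"])  # rejected
print(result["reasons"])   # ['debt-to-income ratio above 0.45']
```

A blackbox model offers no such trail, which is precisely the transparency concern raised in the next paragraph.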

Transparency is also important for fully trusting the system: validating the decisions made by AI means not only detecting anomalies in the decision process, such as bias, mistakes, manipulation of data, deficiencies, non-compliance with rules (i.e. GDPR) and cybersecurity crimes linked to work processes such as dataset poisoning, internal network manipulation and side-channel attacks [69], but also detecting clearly and precisely at which step the anomalies occurred and what information the AI fed itself [10, 12, 64, 66–68, 70–72].

Although AI can assess customers from various angles, namely with non-traditional data such as customers' connections, internet searches, network diversity, etc., how reliable is this information for making an informed decision about a person's repayment ability, and thus their future? Does a person's credit score increase if they socialise with those who are creditworthy? Borrowers may also be judged on how they behave online, or on dishonesty in disclosing financial data, forming biases and leading to unfair judgements [73]. Also, are customers aware that non-traditional data is used in the evaluation process to assess their loan repayment ability [6]?

AI that is trained through supervised learning, where both inputs and outputs are fed into the system, has no chance of bias unless the data fed to it is itself biased. The data used to train an ML algorithm must be representative of the wide range of customers who will apply for loans, namely the whole population [27, 72]. If a population is underrepresented, or there are rare cases along lines such as gender, race, ethnicity, marital status or zero credit history, and this information is used to train the AI, it will deliver biased results where the data is highly correlated within these categories [7, 28, 72–74].
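One common diagnostic for the group-correlated bias described above is to compare a trained model's approval rates across a protected attribute. The decision records below are fabricated for illustration, and the parity check is one possible diagnostic, not a procedure prescribed by the chapter.

```python
# Illustrative sketch: flag a model whose approvals diverge sharply by group,
# a symptom that its training data may be unrepresentative or biased.

def approval_rate(decisions, group, attr="group"):
    subset = [d for d in decisions if d[attr] == group]
    return sum(d["approved"] for d in subset) / len(subset)

# Hypothetical model outputs: group A approved 80/100, group B only 50/100.
decisions = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20 +
    [{"group": "B", "approved": True}] * 50 + [{"group": "B", "approved": False}] * 50
)

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(round(gap, 2))  # 0.3 -- a 30-point gap warrants auditing the training data
```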

In unsupervised learning, AI trains itself to make independent decisions; depending on what it trains itself on, those decisions can be biased. In reinforcement learning, AI uses its own initiative to combine various decisions into an ultimate decision, where bias can form as well. In checking creditworthiness, AI can discriminate on gender if more men are in certain professions or earn higher salaries, on race if more discount stores are located near ethnic minorities, on spelling mistakes in internet searches, and so on. Statistics reveal that algorithms accept white applicants and reject black applicants, evident from the gradual reduction in black applicants' loan approvals at banks [64].

According to Janssen, algorithms can systematically introduce inadvertent bias, reinforce historical discrimination, favor a political orientation or reinforce undesired practices [75]. Standard affordability rules such as defaults, loan-to-value and loan-to-income may not be applicable to all groups of borrowers [76], causing low-income borrowers to be marginalised. Looked at from a different perspective, one person's data could contribute to a whole minority, race, gender, marital status or section of society being judged in a certain way, forming biases and causing more harm than intended. For example, an algorithm might pick up 20 black females who are constantly delinquent on their loans as representative of the whole black female population. AI could also link financially vulnerable customers to mental health issues [12]. Banks could utilise this information to turn down loan applications, causing more harm to society than intended [12]. Yet training AI systems to replicate human decision-making skills is a challenge, as it is difficult to transform the various algorithmic concepts into training data that solves every problem for a range of lending products [10, 77].

#### **4.3 Management**

Banks rely heavily on management not only to generate earnings and increase profit margins [3] but also to keep the bank alive [78]. AI helps banks to be more efficient and effective.

The legal profession requires predictability in its approach, i.e. contracts are written in a way that makes it known how they will be executed. As such, the legal system offers a predictable environment in which customers can improve their lives [64]. Therefore, AI needs to be equally predictable to customers.

The GFC was the outcome of human greed, manipulation and corruption. As such, AI algorithms need to be robust against exploitation [64]. Discontented employees or external foes may learn the inner workings of an AI model to corrupt its algorithms or use the AI application in malfeasant ways [28]. This could trigger a worse catastrophe than the GFC, as the involvement of AI increases the complexity and opaqueness of the financial system, making it difficult to configure a solution.


When an AI system fails at its assigned task, it is difficult to pin down who is to blame for its actions, as the AI ecosystem comprises a wide range of stakeholders: the philosopher, the AI researcher, the data scientist, the data provider, the developer, the library author, the hardware manufacturer, the OS provider, programmers, etc. Each has established procedures for their part of the AI, and responsibilities are distributed widely amongst them. As such, when a catastrophe strikes it would be difficult to assign liability, and this could be a perfect cover for mistakes, manipulation and exploitation [64, 72, 79]. In the pursuit of accumulating big data, banks could cross boundaries and incorporate customers' private information. As such, when any loss results from the use of AI, should the scientists who tune the experience to the needs of consumers, the employees who write the content of the chatbots, or the algorithm provider be liable [5, 7]?

As AI systems are interconnected, hackers or malicious software can manipulate a bank's data by hacking clients' financial details, creating false identities, or flooding systems with fabricated data, resulting in misclassification or bad clusters that cause incorrect decisions, consumer backlash and regulatory repercussions [28, 72].

Algorithms constantly seek to improve their predictive power. As such, they are on a constant lookout for correlations, producing spurious relationships that eventually lead to biased conclusions [80].
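Why an exhaustive search for correlations manufactures spurious relationships can be shown with a little probability. The sample size, number of screened features and matching threshold below are hypothetical choices for the sketch.

```python
# Illustrative sketch of the multiple-comparisons effect behind spurious
# correlations: screen enough random features and some will "predict" the
# target almost perfectly by chance alone.

from math import comb

n_obs = 10       # observations of a binary target
n_features = 100 # unrelated random binary features screened against it
threshold = 9    # call a feature "predictive" if it matches >= 9 of 10

# Chance that ONE pure-noise feature matches the target in >= 9 positions.
p_single = sum(comb(n_obs, k) for k in range(threshold, n_obs + 1)) / 2 ** n_obs

# Chance that AT LEAST ONE of the 100 noise features clears the bar.
p_any = 1 - (1 - p_single) ** n_features
print(round(p_single, 4))  # 0.0107: any single noise feature rarely "works"
print(round(p_any, 2))     # 0.66: yet a spurious "signal" is more likely than not
```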

The literature has pointed out the potential for AIs to act on biased data [81–92]. Scientists have realised that ML can discriminate against customers based on race and gender; one such example is the 'white guy' syndrome, where men are picked over women. Input data is directly linked to the outcome. As every individual has their own biases, norms and ethics, it is difficult to establish that biases will not exist even after an AI has gone through training data [84, 93]. Also, to correct existing biases under the Fair Lending Act, and to improve processes and innovation, more data from people of different disabilities, colours, ages, genders and creeds could be incorporated into the system, but only if customers feel comfortable sharing it [12, 32].

As developing and operating AI requires extensive resources and big data, only large banks can be players in this field. This encourages concentration, affecting healthy competition in the market [7]. Banks also have to rely heavily on technology companies for AI's critical tools and infrastructure, increasing operational risk [7].

As there are only a few players in the market, operational risk could easily feed into systemic risk. On top of that, the widespread use of AI in similar functions, such as the provision of credit or the trading of financial assets, and the uniformity of the data, training and methodology employed to develop the algorithms, could spark off herding and procyclical behaviour [7].

Banks that work extensively with AI need staff with expertise not only in finance but also formal training in computer science, cybersecurity, cryptography, decision theory, machine learning, formal verification, computer forensics, steganography, ethics, mathematics, network security, psychology and other relevant fields. The challenge is to find a sufficient number of staff to fill this role.

AI in the form of robo-advisory services incurs high development, marketing and advertising costs. A single client acquisition costs between \$300 and \$1,000, with clients at the lower end generating only \$100 in annual revenues [9, 94]. Robo-advisors' slim operating margins and low average account sizes would quickly eat up the profits garnered, taking banks a decade or more to recover the \$10 to \$100 million in marketing costs [9, 95].
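The payback arithmetic behind the decade-long recovery can be made explicit. The \$1,000 acquisition cost and \$100 annual revenue come from the figures cited above [9, 94]; the simple no-churn, flat-revenue model is an assumption of this sketch.

```python
# Back-of-envelope sketch of the robo-advisory payback arithmetic.

acquisition_cost = 1000  # upper-end cost to acquire one client, USD
annual_revenue = 100     # revenue from a lower-end client, USD per year

# Years of revenue needed just to recoup acquiring the client,
# before the $10-100 million in marketing costs is even counted.
years_to_break_even = acquisition_cost / annual_revenue
print(years_to_break_even)  # 10.0
```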

Some studies have pointed out that ML can only act on the primary stages of decision making, such as data processing and forming predictions. The higher levels of judgement, action and task require special skills, such as empathy, lateral thinking and risk evaluation, which AI is unable to muster [6].


Algorithmic trading through ML could also facilitate trading errors. A single failure in an AI system could lead to devastating catastrophes, without a chance for recovery, resulting in flash crashes [96, 97] and causing "excess volatility or increase pro-cyclicality as a result of herding" [98]. Besides, major financial institutions have returned compliance software that stopped detecting trading issues arising from excluded customer trades [28]. Developers warn that new intelligent features embedded into AIs could pose unexpected and unknown risks, creating new points of vulnerability and leaving loopholes for hackers to exploit [28, 32].

If humans make mistakes or manipulate the system, they can be fired instantly. However, if AI makes mistakes or becomes corrupted, customers will lose hope and trust in the bank and its systems [5]. Robo-advisors work with several parties, namely clearing firms, custodians, affiliated brokers and other companies in the industry, to offer their services to customers. While Lewis suggests that robo-advisors resolve conflicts of interest amongst the parties [99], Jung et al. suggest conflicts remain, at a cost to customers [100]. If a company uses brokers, for example, this cost is transferred to customers, increasing the price of the service while the robo-advisor makes a profit as the middleman [101]. In other scenarios, robo-advisors could receive a fee for order flow in exchange for routing trades to a clearing firm, or hold an interest in the very securities that customers are looking into [101].

Scripting errors, lapses in data management and misjudgements in model-training data can compromise fairness, privacy, security and compliance [28]. As the volume of data being sorted, linked and ingested is large, and is further complicated by unstructured data, mistakes such as revealing sensitive information can take place: i.e. a client's name might be redacted from the section used by an AI but still be present in the stockbroker's notes section of the record, thus breaching the European Union's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) [28].
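The redaction lapse just described can be caught with a simple consistency check across all fields of a record, not only the one an AI consumes. The record layout, field names and the leaked name below are fabricated for illustration.

```python
# Hypothetical sketch of the GDPR/CCPA redaction lapse: a name scrubbed from
# the structured field an AI reads survives in a free-text notes field.

def unredacted_fields(record, sensitive_terms):
    # Return every field where a supposedly redacted term still appears.
    return [field for field, text in record.items()
            if any(term.lower() in str(text).lower() for term in sensitive_terms)]

record = {
    "client_name": "[REDACTED]",                             # field the AI consumes: clean
    "broker_notes": "Spoke to Jane Smith re: margin call",   # free text: leaked
}

print(unredacted_fields(record, ["Jane Smith"]))  # ['broker_notes']
```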

#### **4.4 Earnings**

Banks that manage their expenses well while fully utilising their assets to generate constant revenue streams are most likely to be sound [3]. AI enables banks to offer unique selling points in their products, increasing customer satisfaction and boosting sales and revenue [6].

Studies have recorded chatbot controversy and backlash [5, 6]. Microsoft's AI chatbot Tay tweeted racist, sexist, xenophobic and prejudiced opinions, learned from the tweets it read and its interactions with younger demographics on Twitter, upsetting customers [7, 102–104].

To increase market share and competitive position, improve the predictive power of algorithms and ensure AIs are trained properly to avoid biased decisions, banks need a large set of quality, diverse data. In the pursuit of, and under pressure to achieve, this goal, banks might share customers' private data without their consent, when customers trusted the bank to keep it confidential [7, 9, 12]. Privacy is important not only for customers but also for banks, as it allows banks to retain their competitive position [27].

Training AIs to exclude certain segments of customers from sales could also lead to discrimination and bias [28]. AI could also weave together zip codes and the incomes of individuals to create targeted offerings, causing discrimination against other classes and groups of people [28]. Robo-advisors can provide incorrect risk evaluations if they are not equipped with all aspects of an investor's financial condition needed to accurately assess overall risk. When customised questions are unable to capture unique life experiences, such as receiving a large inheritance, customers are better off with human advisors [9].

Customers are more likely to rely on human advisers than chatbots and roboadvisors to assist when it comes to more personal and sensitive matters. One example

**295**

**5. Conclusion**

*Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2*

is when large sums of money is involved either through wealthy customers or due to death and illness. Another is when there is market volatility. Customers are less inclined to trust new technologies and would prefer humans to handle such transactions for accountability purposes [6, 9, 105, 106]. Customers are also more confident to gain insight from human advisors when it comes to complex financial products such as derivatives, discussing complicated matters or making complains [6]. AI lacks emotion quotient. As such is unable to connect, understand or react to human at deeper level to comprehend their emotion and to empathise, rejoice or sympathize with them [5, 6]. As such, some prefer front-desk receptionist to a

Although the equality act, that oversees the violation against race and gender acknowledges inequality when "a person" treats "another person" favourably. It does not recognise discrimination by AI's but it recognises discrimination by other "non-humans" such as government agency or regulator. As such, AI cannot be taken

Banks may not disclose their use of AI to customers to either benefit the bank or to avoid "fear factor of AI" amongst customers. Banks have to be transparent with their customers revealing if they are working with AI or human advisors. Although Swedbank and Société Générale agree it is best to be honest with customers others may not agree as it disadvantages the banks in many ways. As such AI are being trained to offer seamless interaction through training to very closely mimic humans. [6].

Banks are subject to market risks (i.e. interest rate risk, foreign exchange risk, price risk etc) that can have adverse effects on bank's earnings and capital. AI provides solutions to real world problems [20], through real time, enabling banks to keep up, adapt and respond to constant and dynamic changes in the environ-

Unsupervised ML techniques are the only ones that can be used to detect frauds as it is able to identify unusual transaction characteristics to then investigate and test further to prove the authenticity of the transaction [27, 32]. Besides, unsupervised ML can also be used to closely monitor traders' behavior enabling auditing [107]. Yet, as unsupervised ML is linked to blackbox decision making, it is difficult

ML in the form of decision tree and logistic regression models are interpretable but lack accuracies [10]. AI in the form of Deep Neural Networks (DNNs) and ensemble models such as random-forest, XGboost and Adaboost have strong higher predictive power and accuracy as it works with multiple layers of hundreds and thousands of parameters and neurons while applying nested non-linear structure which makes them opaque and a complex blackbox [10, 27, 32, 71]. Various stakeholders have labelled the effort of allowing black boxes to make decisions as irresponsible as decisions created by these blackboxes are not justifiable, not interpretable, trackable,

Investment in AI is one of the core elements of bank survival. Therefore, it is vital for banks to continue to deploy AI in their operations. Yet, AI suffers from a series of limitations that must be considered in assessing its use. Many studies have raised concerns about AI bias, discrimination, privacy violations, the manipulation of political systems and the compromising of national security.


*DOI: http://dx.doi.org/10.5772/intechopen.95806*



*Operations Management - Emerging Trend in the Digital Era*

**4.4 Earnings**

Banks that manage their expenses well while fully utilising their assets to generate constant revenue streams are most likely to be sound [3]. AI enables banks to offer unique selling points in their products that increase customer satisfaction, boosting sales and revenue [6].

Studies have recorded chatbot controversy and backlash [5, 6]. Microsoft's AI chatbot Tay tweeted racist, sexist, xenophobic and prejudiced opinions, learned through the tweets it read and its interactions with younger demographics on Twitter, upsetting customers [7, 102–104].

If humans make mistakes or manipulate the system, they can be fired instantly. However, if AI makes mistakes or is corrupted, customers will lose hope and trust in the bank and its systems [5]. Robo-advisors work with several parties, namely clearing firms, custodians, affiliated brokers and other companies in the industry, to offer their services to customers. While Lewis suggests that robo-advisors resolve conflicts of interest amongst the parties [99], Jung et al. suggest conflicts remain, costing customers [100]. If a company uses brokers, for example, this cost is transferred to customers, increasing the price of the service while the robo-advisor makes a profit as the middleman [101]. In other scenarios, robo-advisors could receive a fee for order flow in exchange for routing trades to a clearing firm, or hold an interest in the securities that customers are looking into [101].

Training AIs to exclude certain segments of customers from sales could also lead to discrimination and bias [28]. AI could also weave together zip codes and individuals' incomes to create targeted offerings, discriminating against other classes and groups of people [28]. Robo-advisors can provide incorrect risk evaluations if they are not equipped with all aspects of an investor's financial condition needed to accurately assess overall risk. When customised questions are unable to capture unique life experiences, such as receiving a large inheritance, customers are better off with human advisors [9]. Customers are also more likely to rely on human advisers than on chatbots and robo-advisors for more personal and sensitive matters. One example is when large sums of money are involved, whether through wealthy customers or due to death and illness. Another is when there is market volatility. Customers are less inclined to trust new technologies and prefer humans to handle such transactions for accountability purposes [6, 9, 105, 106]. Customers are likewise more confident gaining insight from human advisors when it comes to complex financial products such as derivatives, discussing complicated matters or making complaints [6].

AI lacks an emotional quotient. As such, it is unable to connect with, understand or react to humans at a deeper level, to comprehend their emotions, or to empathise, rejoice or sympathise with them [5, 6]. Hence, some customers prefer a front-desk receptionist to a chatbot or an electronic menu that needs to be navigated.

Although the equality act, which oversees violations involving race and gender, acknowledges inequality when "a person" treats "another person" less favourably, it does not recognise discrimination by AI, even though it recognises discrimination by other "non-humans" such as a government agency or regulator. As such, AI cannot be taken to court [6].

Banks may not disclose their use of AI to customers, either to benefit the bank or to avoid a "fear factor of AI" amongst customers. Banks have to be transparent with their customers, revealing whether they are working with AI or with human advisors. Although Swedbank and Société Générale agree that it is best to be honest with customers, others may not, as disclosure disadvantages the banks in many ways. As such, AIs are being trained to mimic humans closely enough to offer seamless interaction [6].

#### **4.5 Sensitivity**

Banks are subject to market risks (i.e. interest rate risk, foreign exchange risk, price risk, etc.) that can have adverse effects on a bank's earnings and capital. AI provides real-time solutions to real-world problems [20], enabling banks to keep up with, adapt to and respond to constant and dynamic changes in the environment, thus improving bank stability and soundness.

Algorithmic trading through ML could also facilitate trading errors. A single failure in the AI system could lead to devastating catastrophes without a chance for recovery, resulting in flash crashes [96, 97] and causing "excess volatility or increase pro-cyclicality as a result of herding" [98]. Major financial institutions have also returned compliance software that stopped detecting trading issues because it excluded customer trades [28]. Developers caution that new intelligent features embedded into AIs could pose unexpected and unknown risks, creating new points of vulnerability and leaving loopholes for hackers to exploit [28, 32].

Scripting errors, lapses in data management and misjudgments in model-training data can compromise fairness, privacy, security and compliance [28]. As the volume of data being sorted, linked and ingested is large, and is further complicated by unstructured data, mistakes such as revealing sensitive information can occur: a client's name might be redacted from the section used by an AI yet remain present in the stockbroker's notes section of the record, breaching the European Union's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) [28].

To increase market share and competitive position, improve the predictive power of algorithms and ensure AIs are trained properly to avoid biased decisions, banks need a large set of quality and diverse data. In the pursuit of, and under pressure to achieve, this goal, banks might share customers' private data without their consent, when customers trusted the bank to keep the data confidential [7, 9, 12]. Privacy is important not only for customers but also for banks, allowing banks to retain their competitive position [27].
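The redaction pitfall described earlier — a client's name removed from the field an AI reads but left intact in free-text broker notes — can be sketched as follows. This is a minimal illustration: the `redact_client` helper, field names and data are hypothetical, and production redaction would rely on named-entity recognition rather than exact string matching.

```python
import re

def redact_client(record: dict, client_name: str) -> dict:
    """Redact a client's name from EVERY field of a record, including
    free-text notes, not just the structured fields an AI model reads."""
    pattern = re.compile(re.escape(client_name), re.IGNORECASE)
    return {field: pattern.sub("[REDACTED]", text) for field, text in record.items()}

record = {
    "client": "Jane Doe",
    "broker_notes": "Spoke to Jane Doe about moving funds offshore.",
}

clean = redact_client(record, "Jane Doe")
print(clean["broker_notes"])  # Spoke to [REDACTED] about moving funds offshore.
```

Redacting only the structured `client` field, as in the pitfall above, would leave the name sitting in `broker_notes`; scanning every field of the record is what closes the GDPR/CCPA gap.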

Unsupervised ML techniques are the only ones that can be used to detect fraud, as they are able to identify unusual transaction characteristics that can then be investigated and tested further to prove the authenticity of the transaction [27, 32]. In addition, unsupervised ML can be used to closely monitor traders' behaviour, enabling auditing [107]. Yet, as unsupervised ML is linked to blackbox decision making, it is difficult to point out whether a decision was made fairly.
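The unsupervised idea — flagging transactions that deviate from the bulk of the data without any labelled fraud examples — can be sketched with a simple median-absolute-deviation outlier test. The `flag_unusual` helper, threshold and amounts below are illustrative, not a production fraud model:

```python
import statistics

def flag_unusual(amounts, threshold=3.5):
    """Return indices of amounts far from the median, measured in units
    of the median absolute deviation (MAD). Unsupervised: no labelled
    fraud examples are needed to train or tune anything."""
    med = statistics.median(amounts)
    mad = statistics.median([abs(a - med) for a in amounts])
    return [i for i, a in enumerate(amounts) if abs(a - med) / mad > threshold]

# Seven routine payments and one very large transfer.
txns = [120.0, 95.5, 130.25, 101.0, 110.0, 98.75, 125.5, 50_000.0]
print(flag_unusual(txns))  # [7] — only the 50,000 transfer is flagged
```

Flagged transactions would then be investigated further, as the text describes; real systems use richer features (merchant, geography, timing) and clustering or isolation-forest methods rather than a single amount column.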

ML in the form of decision tree and logistic regression models is interpretable but lacks accuracy [10]. AI in the form of Deep Neural Networks (DNNs) and ensemble models such as random forest, XGBoost and AdaBoost has much higher predictive power and accuracy, as it works with multiple layers of hundreds of thousands of parameters and neurons while applying nested non-linear structures, which makes it an opaque and complex blackbox [10, 27, 32, 71]. Various stakeholders have labelled allowing black boxes to make decisions as irresponsible, since the decisions these blackboxes produce are not justifiable, interpretable, trackable, legitimate or trustworthy, lacking detailed explanations [71].
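The interpretability contrast can be seen in miniature below: a logistic regression reduces to a handful of readable weights, whereas a DNN spreads its logic across thousands of nested parameters. This is a sketch only — the toy credit data, feature names and training loop are illustrative, not a production scorecard:

```python
import math

# Toy credit data: (normalised income, debt ratio) -> approved (1) / rejected (0).
X = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.3, 0.8), (0.2, 0.9), (0.1, 0.7)]
y = [1, 1, 1, 0, 0, 0]

w = [0.0, 0.0]  # one weight per feature
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Each weight is directly readable: a positive weight raises the
# approval probability, a negative one lowers it.
print(f"income weight: {w[0]:+.2f}, debt-ratio weight: {w[1]:+.2f}")
```

A loan officer can point at the negative debt-ratio weight to justify a rejection to a customer or a regulator; no comparable single number exists inside a deep network, which is precisely the traceability problem raised above.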
