An important part of the operations of banks and other financial institutions is compliance with AML (Anti-Money Laundering) law, which aims to fight financial crimes such as identity fraud, terrorist financing, and illegal transactions.
This blog explores how AI is already helping banks conduct AML and KYC. As a bonus, we also explore the concept of explainable AI from a banking point of view, and how it can help financial institutions in their operations.
What is KYC?
An integral subset of AML law is KYC, or Know Your Customer, which refers to the operations a financial institution carries out to verify the identity and assess the risk of each of its customers and transactions. KYC is a mandatory procedure that all banks must follow in order to onboard new customers and remain compliant.
KYC is composed of three parts, all of which are equally important in fighting financial crime:
- CIP (Customer Identification Program): Ensuring that every customer is who they claim to be;
- CDD (Customer Due Diligence): Determining the risk for illegal financial transactions for each customer;
- Ongoing monitoring of all customers and accounts: This last step is crucial in the KYC process, as a customer could be using an account for some time before they start making illegal transactions.
How is AI useful in AML, and specifically KYC?
Traditionally, a customer will physically go to the front office of a bank to, for instance, open an account or request a loan, and the KYC process will then begin. Nowadays, it is also possible to do this online, where the customer uploads proof of identity and all other relevant documents. However, this process can take anywhere from weeks to months, depending on how long it takes to review all the documentation.
By leveraging AI in the KYC process, processing time, manual labor, and the risks derived from human error can all be significantly reduced, while improving customer experience and customer conversion. AI can help in all steps of the KYC process, as outlined below:
1) Customer Identification Program (CIP):
Technologies such as OCR (Optical Character Recognition) are being used to automatically process documents in which potential customers provide personal data such as their name, gender, address, or social security number. This data is then compared to the official documents, and anomaly detection models are used to flag discrepancies between the provided data and the official data (e.g., an edited date of birth or name, or large differences in handwriting, such as in signatures).
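As an illustration, the core of such a discrepancy check can be sketched with simple fuzzy string matching. This is a minimal stand-in for a real anomaly detection pipeline, and all field names and the threshold below are hypothetical:

```python
from difflib import SequenceMatcher

def field_similarity(extracted: str, official: str) -> float:
    """Return a 0..1 similarity between an OCR-extracted field and the official record."""
    return SequenceMatcher(None, extracted.lower().strip(), official.lower().strip()).ratio()

def flag_discrepancies(extracted: dict, official: dict, threshold: float = 0.85) -> list:
    """Flag fields whose extracted value diverges too far from the official record."""
    return [
        field
        for field in official
        if field_similarity(extracted.get(field, ""), official[field]) < threshold
    ]

# A small OCR typo in the name passes, but an entirely different address is flagged.
application = {"name": "Jon Doe", "dob": "1990-01-01", "address": "22 Oak Ave"}
registry    = {"name": "John Doe", "dob": "1990-01-01", "address": "1 Main St"}
print(flag_discrepancies(application, registry))  # ['address']
```

In production, the similarity measure and threshold would be tuned per field type; a date field, for example, needs exact comparison rather than fuzzy matching.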
Another way in which AI is being used to automate parts of the CIP process is facial recognition, where models compare the photos on official documents, such as passports, to photos of the customer collected in the onboarding process (such as a selfie or a video provided by them). In essence, this technology helps banks determine whether the customer is indeed the owner of the documents they are presenting. This was particularly useful during the COVID-19 pandemic, when banks' front offices were closed and the only way to onboard new customers was via online application.
Liveness detection models are also used to counter fraud. In some cases, fraudsters will attempt to impersonate someone by taking selfie-videos of the victim while they are unaware or asleep, or even by using a static picture of them. Liveness detection models analyze the movement of the person in a video and determine whether they are moving naturally and are aware that their image is being captured.
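Under the hood, face-matching systems typically reduce each photo to an embedding vector and compare the vectors. The sketch below assumes such embeddings already exist (a real face model produces hundreds of dimensions; the toy vectors and the 0.8 threshold here are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(doc_embedding, selfie_embedding, threshold=0.8):
    """Decide whether the passport photo and the selfie likely show the same person."""
    return cosine_similarity(doc_embedding, selfie_embedding) >= threshold

# Toy 4-dimensional embeddings for a passport photo and a selfie.
passport = [0.9, 0.1, 0.3, 0.5]
selfie   = [0.85, 0.15, 0.28, 0.52]
print(same_person(passport, selfie))  # True for these nearly identical vectors
```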
2) Customer Due Diligence (CDD) and Ongoing Monitoring:
Another mandatory step of the KYC process is screening customers against lists of Politically Exposed Persons (PEPs) and sanctioned individuals, and investigating their relationships to prevent potential money laundering. Some traditional tools generate 'hits' when a customer simply has the same last name as someone who is suspected or accused of money laundering, resulting in many false positives. An employee of the institution must then manually verify each hit using Open Source Intelligence (OSINT), i.e. news articles, social media, etc.
This investigation can take hours per customer. AI can reduce false positives by producing a risk score that incorporates various data points, such as device geolocations and IP addresses.
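The idea can be sketched as follows: a fuzzy name match against a watchlist alone raises an alert, but combining it with corroborating signals yields a more discriminating score. The watchlist entries, signals, and weights below are purely illustrative:

```python
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Maria Gonzalez"]  # illustrative sanctions/PEP entries

def name_hit(customer_name, threshold=0.85):
    """Return the best fuzzy watchlist match, or None when no entry is close enough."""
    best = max(WATCHLIST, key=lambda w: SequenceMatcher(None, customer_name.lower(), w.lower()).ratio())
    score = SequenceMatcher(None, customer_name.lower(), best.lower()).ratio()
    return best if score >= threshold else None

def risk_score(customer):
    """Combine a name hit with extra signals (hypothetical weights) to cut false positives."""
    score = 0.0
    if name_hit(customer["name"]):
        score += 0.5  # name resembles a watchlist entry
    if customer.get("ip_country") != customer.get("declared_country"):
        score += 0.3  # device geolocation contradicts the declared country
    if customer.get("dob_matches_watchlist"):
        score += 0.2  # date of birth also matches the watchlist entry
    return score

alert = risk_score({"name": "Ivan Petrov", "ip_country": "NL", "declared_country": "NL"})
print(alert)  # 0.5: the name matches, but the other signals do not corroborate
```

A name-only hit thus scores below a hit that is corroborated by location and date-of-birth signals, which is exactly how the false positives of naive last-name matching get filtered out.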
Financial institutions will often perform portfolio risk segmentation in the KYC process, where an institution's portfolio is divided into low-, medium-, and high-risk clients. Commonly, this process is based on expert business rules. AI can be used to accelerate it, for example by using optimization models to prioritize the portfolio risk backlog through optimizing the weights of the business rules applied. AI can also be leveraged to aid in the segmentation itself: several segmentation models can be used to divide an institution's portfolio into risk buckets.
It is worth noting that most institutions will not leave the segmentation entirely up to AI models, but will instead use their outputs in combination with expert rules, in order to remain compliant with regulation and retain explainability.
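A minimal sketch of rule-based risk bucketing makes the role of the weights concrete. The rules, weights, and cut-offs below are hypothetical; in practice an optimization model would tune the weights against labeled outcomes:

```python
def rule_score(client, weights):
    """Weighted sum of boolean business rules (weights are illustrative)."""
    return sum(w for rule, w in weights.items() if client.get(rule, False))

def segment(client, weights, low=0.3, high=0.6):
    """Assign a client to a low/medium/high risk bucket based on the rule score."""
    score = rule_score(client, weights)
    if score >= high:
        return "high"
    if score >= low:
        return "medium"
    return "low"

# Hypothetical business rules and their tuned weights.
WEIGHTS = {"cash_intensive_business": 0.4, "offshore_accounts": 0.3, "pep_connection": 0.3}

print(segment({"offshore_accounts": True}, WEIGHTS))                                # medium
print(segment({"cash_intensive_business": True, "pep_connection": True}, WEIGHTS))  # high
```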
What is explainable AI in the banking sector, and how does it help banks?
Financial institutions are some of the most regulated bodies in society. For instance, regulators such as central banks (among others) ensure that banks have processes in place to uphold their 'Duty of Care'. Under this obligation, banks must conduct thorough research into their clients to ensure that their interaction with the bank does not pose an unnecessary financial risk to the client. For instance, whenever a client applies for a loan, it is the bank's duty to ensure the client will be able to pay back the requested loan with interest. The Duty of Care also extends to ensuring the client has enough resources left to make a decent living.
Conducting extensive background checks on clients requires that banks have the resources and processes to execute these checks, and that they can collect, clean, and store the data needed to determine clients' financial positions. As a result, banks have to store and maintain vast amounts of data and expand their (cloud) infrastructure to meet the regulator's demands.
Although the intention of the regulator is reasonable, executing these obligations is not as straightforward as it sounds, and over recent years the obligations placed on banks have grown considerably. For instance, a piece of legislation that places a heavy burden on banks in the Netherlands is the WWFT. Under this act, banks are obliged to perform background checks on their customers and their transactions. With the advancement of the digital age, the number of transactions banks have to process nowadays is enormous. Since processing and checking all of this information manually is impossible, banks opt for automated approaches to remain compliant with the WWFT. With the recent rise of AI adoption, it has also become increasingly attractive for banks to implement machine learning techniques to aid them in automated decision making.
However, this directly conflicts with the bank's Duty of Care when it comes to the obligation to keep decisions explainable for years. Machine learning techniques and other AI models are often mathematically complex, and it is often very difficult to properly explain the rationale behind their outcomes. This conflict marks one of the biggest challenges that banks currently face in this domain: on the one hand, banks are in desperate need of automated decision making due to the explosion of data and digital transactions; on the other, they are obliged to comply with regulatory requirements.
The solution to this dilemma is the rise of novel technologies that can explain the rationale of an AI model. These technologies are often referred to as ‘Explainable AI’.
Explainable AI translates model insights into a human-understandable format, which helps in understanding and explaining model behavior once the model is implemented, for instance by explaining to a client why their loan application was rejected. Similarly, ensuring the explainability of AI to regulatory bodies mitigates the risk of AI models being shut down, along with the loss of their return on investment. In addition, implementing explainable AI helps address misconceptions that often surround AI, such as the idea that AI decisions and their repercussions lack accountability. It also helps mitigate certain weaknesses of AI, such as bias (models will reproduce any bias present in the data they are fed).
Moreover, in many cases, ensuring that AI applications are explainable is required by the regulator. For instance, the EU's AI Act specifically requires financial services institutions to be able to explain their use of AI. In other cases, such as loan approval or denial, a strong focus on the explainability of models, decisions, and processes is necessary to build better relationships with customers.
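For a model simple enough to be inherently interpretable, such as a linear credit-scoring model, the loan-rejection example can be sketched directly: the score decomposes into per-feature contributions, and the most negative contribution names the main reason for the rejection. The features, coefficients, and intercept below are all hypothetical:

```python
def explain_decision(features, coefficients, intercept, threshold=0.0):
    """Break a linear score into per-feature contributions so a rejection can be explained."""
    contributions = {name: features[name] * coefficients[name] for name in coefficients}
    score = intercept + sum(contributions.values())
    decision = "approved" if score >= threshold else "rejected"
    # Contributions sorted most-harmful first, for use in the explanation to the client.
    return decision, sorted(contributions.items(), key=lambda kv: kv[1])

# Hypothetical coefficients of a fitted linear credit model.
COEFFS = {"income_k": 0.02, "debt_ratio": -3.0, "missed_payments": -0.8}

decision, reasons = explain_decision(
    {"income_k": 40, "debt_ratio": 0.6, "missed_payments": 2}, COEFFS, intercept=1.0
)
print(decision)       # rejected
print(reasons[0][0])  # debt_ratio: the feature that hurt the score most
```

For complex, non-linear models, post-hoc explanation techniques such as Shapley-value-based attributions serve the same purpose: decomposing a single prediction into per-feature contributions that a case handler or a client can understand.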
The banking sector is steadily exploring the ways it can employ AI technology. In this blog, we discussed ways that AI is currently being used by banks and other financial institutions, specifically for AML and KYC use cases.
Xomnia is the leading AI consulting company in the Netherlands, with a broad portfolio of leading FSI clients. Our consultants can help your organization identify the opportunities AI can offer you, and implement these AI solutions from beginning to end. Curious to know more? Get in touch with us.