Navigating the AI landscape in asset finance
Adapted from a Leasing Life article by Jeremy Weltman, 4 January 2024.
Adoption of AI in financial businesses
With more experimentation, an evolving regulatory regime and positive outcomes boosting confidence in its use, AI adoption will continue at a rapid pace, as it has done in recent years.
A recent survey by Statista found that in 2022, 46% of financial businesses worldwide reported widescale adoption of AI, 25% limited adoption, and 14% were piloting its use; only 6% were not using AI at all. Crucially, 8% of respondents regarded it as critical, a share expected to rise to 43% by 2025.
Another survey, by the web security company Indusface, found that over a quarter of all UK finance and accounting businesses have adopted AI, while one by NVIDIA revealed that 35% of respondents report AI applications creating operational efficiencies, with 20% saying it has reduced their cost of ownership.
Over a third of respondents claimed it had lowered their annual costs by more than 10%. These benefits accrue from retaining customers, creating new business and improving the accuracy of modelling, which lowers risk exposures.
“AI has many different forms, and has existed for many years, in all walks of life. It sits behind just about everything we digitally interact with and, given this, it is perhaps surprising that its widespread appeal and excitement have only peaked recently,” says Flegg.
AI is involved every time a phone is used, an e-mail is scanned, or a beverage is purchased. Its key uses in financial services include credit referencing, market forecasting and, crucially, fraud detection. Yet it is only now, it seems, since the emergence of ChatGPT, developed by OpenAI, that everyone is talking about it.
A survey conducted by The Economist in 2022 found that, among the many uses of AI, fraud detection ranked first (57.6%), followed by optimising IT operations (53.7%), digital marketing (50.2%), risk assessment (48.3%) and personalising customer experience (43.9%).
AI is a very broad field, notes Flegg. It encompasses machine learning, neural networks, computer vision, robotics and more; it has a huge number of applications and is evolving rapidly.
He mentions that finance has been ahead of the curve, employing AI in these previously labour-intensive areas, as well as in broader applications such as document processing and customer service enhancements – optical character recognition, call transcription, chatbots and so on.
This has mainly been handled by the big players, given the scale, expertise and expense required, but the technology is becoming increasingly accessible and cheaper to employ.
AI application in the leasing life cycle
AI can be applied across the leasing life cycle, in areas including customer service, sales and marketing, credit underwriting, fraud prevention, predictive intelligence for collections, contract analysis, pricing, Know Your Customer (KYC) and ID verification, business intelligence and predictive analytics. It is increasingly key to driving productivity, improving decision-making and enhancing customer experiences.
AI can take the form of in-house machine learning, built by data scientists using AI algorithms. Cloud providers, such as Amazon Web Services and Google Cloud Platform, offer AI services including Amazon SageMaker, Amazon Bedrock and Vertex AI.
AI-as-a-Service companies, such as Jumio, provide small point solutions that can be easily integrated, and there are platform companies, such as Alfa or Salesforce, providing AI and machine learning solutions integrated into their market-specific products.
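To illustrate how accessible these cloud services have become, here is a minimal sketch of invoking a hosted model through Amazon Bedrock's Python SDK to summarise a lease clause. The model ID, request format and use case are illustrative assumptions, not a recommendation, and should be checked against current AWS documentation.

```python
# A minimal sketch: invoking a hosted LLM on Amazon Bedrock via boto3
# to summarise a lease clause. The model ID and request body format are
# assumptions; check the Bedrock documentation for current values.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

clause = ("The lessee shall maintain the equipment in good working order "
          "and shall bear all costs of routine servicing and repair.")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{
            "role": "user",
            "content": f"Summarise this lease clause in one sentence: {clause}",
        }],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```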
Flegg says Alfa has been working on AI and machine learning solutions under the Alfa iQ brand for several years, with promising results that prove AI solutions are valuable, notably in credit scoring.
The right solution invariably depends on the underlying business requirements. “It is always important to ensure, before embarking on any AI project, that the data behind it is well understood, congruent in its format, and easily accessible via clean application programming interfaces,” he says. “It’s incredibly easy to build in different biases and skews by improperly merging data across different domains,” he warns.
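Flegg's warning about improperly merged data is easy to make concrete. The sketch below, using hypothetical table and column names, shows the kind of defensive checks in pandas – join-cardinality validation and unmatched-row flagging – that catch silent skews before they reach a model.

```python
# A sketch of defensive data merging with pandas. Table and column
# names are hypothetical; the point is the validation, not the schema.
import pandas as pd

applications = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "requested_amount": [50_000, 120_000, 75_000, 30_000],
})
credit_history = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],  # note: duplicate key for customer 2
    "prior_defaults": [0, 1, 0, 0],
})

# validate= raises immediately if the join is not one-to-one, instead
# of silently duplicating application rows and skewing a model.
try:
    applications.merge(credit_history, on="customer_id",
                       how="left", validate="one_to_one")
except pd.errors.MergeError as exc:
    print(f"Bad join detected: {exc}")

# indicator= flags applications with no credit history at all, which
# would otherwise enter training as silent missing values.
merged = applications.merge(
    credit_history.drop_duplicates("customer_id"),
    on="customer_id", how="left", validate="one_to_one", indicator=True)
print(merged["_merge"].value_counts())
```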
Alleviating concerns
Naturally, there are many concerns about the use of technology that has the potential to transform human activity. Chatbots, for example, do not have a favourable reputation.
Flegg says that it is up to the industry to ensure the tooling that is deployed serves customers well by improving their experience rather than acting as a barrier to successful resolution.
Large Language Models (LLMs), such as ChatGPT, may enable chatbots to better provide customer service, but human interaction is still important. “Financial institutions need to find the right balance between AI-powered automation and human-centric interactions to ensure a positive customer experience,” he says.
AI algorithms can be complex and opaque: their decision-making processes are difficult to understand, and this lack of transparency raises concerns about accountability, bias and fairness.
Flegg warns that AI systems can perpetuate existing biases in the data they are trained on, leading to discriminatory outcomes in areas such as credit scoring, insurance pricing and hiring decisions.
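One simple way to surface such bias before deployment is to compare outcome rates across customer segments. The sketch below computes a basic disparate-impact ratio on hypothetical scored applications; the data and the four-fifths threshold are illustrative, and production fairness testing goes considerably further.

```python
# A sketch of a basic disparate-impact check on model decisions.
# The data and the 0.8 threshold (the common "four-fifths" rule of
# thumb) are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "segment":  ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

approval_rates = decisions.groupby("segment")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: approval rates differ materially across segments")
```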
Then of course there is the issue of fraud. AI can help to detect and prevent fraud, but it also creates opportunities for fraudsters to exploit vulnerabilities in AI systems and other infrastructure.
“Financial institutions need to stay vigilant and continuously adapt their AI-powered fraud detection and cybersecurity mechanisms to stay ahead. It is a game of cat and mouse,” says Flegg.
“AI will have a profound impact on asset finance, but is not a one-size-fits-all solution. The types of AI solutions vary.”
An appropriate and responsible approach to AI is important, as non-compliant use of these technologies is a major risk. Security, privacy and fraud are the main concerns: fraudsters are innovating alongside the rapid change, creating an ever-evolving threat landscape.
As leasing is typically a ‘relationship business’, the potential for losing the human touch is also a key concern for both lessor and lessee.
The culture and history of financial institutions are steeped in the ability to make risk choices, so such a culture is naturally sceptical of tech-based solutions and, in some areas, actively resistant. This risk-aversion can extend even to IT, for example in areas such as procurement: an established bank’s IT function may be uncomfortable engaging a fintech start-up to develop an AI solution, even with a strong business case.
Importantly, the current generation of AI most likely to be deployed in financial services businesses is trained on, and therefore biased towards, the data within the organisation. This means that AI could end up reflecting, or even reinforcing, pre-existing organisational biases, such as towards or against particular customer profiles or characteristics.
This could become a challenge when a business is looking to win new business or expand into segments where it lacks experience. It may already be the case in the UK today that SME clients feel increasingly under-served by, and unable to obtain funding from, many established institutions.
A disadvantage could be that businesses that see AI and automation almost exclusively as a means of lowering costs by replacing people will eventually lose the skills and knowledge that built them up in the first place. If that is combined with a relatively narrow and/or internally or historically biased data set, it could lead to a spiral of decline.
Regulatory review
Many will be looking to the regulators for protection, though of course financiers are already incorporating their own frameworks of AI governance and risk management. So there is a question as to whether any new, AI-specific regulation is required, especially around data security.
It should be noted that the current framework, such as the General Data Protection Regulation (GDPR) introduced by the EU – copied by, or acting as a benchmark for, guidelines in other countries, including the UK – already covers automated decision-making on consumers (in Article 22). However, regulators globally are looking into whether the present laws are adequate.
This was demonstrated recently at the UK Government’s AI Safety Summit roundtable, held at Bletchley Park at the beginning of November 2023, which examined the risks of AI adoption and how regulation can remain proactive.
According to Flegg, part of the problem relates to users and stakeholders lacking an understanding of how AI makes decisions. “When they do, they are more likely to trust and accept it,” he says.
Flegg believes that regulation will inevitably become more stringent, building on Article 22 of the GDPR, the proposed Recital 6 amendment to the EU AI Act and proposed FTC regulation, to enforce what he describes as “explainability” and “interpretability” requirements.
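One concrete way a lender might approach such requirements is to report per-decision ‘reason codes’ from a linear credit model: each feature’s signed contribution to the applicant’s score, strongest first. Below is a minimal sketch using scikit-learn, with synthetic data and hypothetical feature names; it is not any particular vendor’s method.

```python
# A minimal sketch of per-decision "reason codes" from a logistic
# regression credit model. Features and data are synthetic; the
# technique (contribution = coefficient * standardised value)
# applies to any linear scorecard.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["years_trading", "prior_defaults", "debt_to_income"]
rng = np.random.default_rng(0)

X = rng.normal(size=(500, 3))
# Synthetic target: default risk rises with the last two features.
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def reason_codes(applicant):
    """Signed contribution of each feature to this applicant's score."""
    z = scaler.transform([applicant])[0]
    contributions = model.coef_[0] * z
    order = np.argsort(-np.abs(contributions))
    return [(features[i], round(float(contributions[i]), 3)) for i in order]

print(reason_codes([2.0, 1.5, 0.8]))  # strongest drivers listed first
```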
AI - a positive impact
The advantages of AI are clear: it will inevitably have a positive impact on businesses obtaining finance. AI will demonstrate that it can make better credit decisions than humans or legacy scorecards, which may result in a greater number of creditworthy businesses being offered finance and fewer defaults.
“This is a win for both the industry and the real economy that is supported by finance products,” says Flegg.
However, many established banks and companies without much of a digital presence (for example, those reliant on introduced business from brokers) are nervous about the risks associated with AI and may lack the in-house digital or IT resources and capability to make the most of it.
The NVIDIA survey mentioned previously finds that securing data scientists with IT expertise is becoming a real problem. Some argue it will not be a question of ‘humans versus AI’ but of ‘humans plus AI’, and others agree that it probably will not massively change employment in the very short term.
The first examples seen have not necessarily led to headcount reductions; instead they have allowed operations or coverage to expand – such as online, out-of-hours customer self-service – while human service levels grow within conventional working times.
As the technology’s performance advances, however, it seems ever more likely that it will take over human activities rather than merely supplementing or complementing them.