The first afternoon panel concentrated on three key topics related to AI: data, culture, and skills.
The session, titled ‘What are the solutions to our limitations?’, was moderated by Finextra’s Gary Wright. The panel featured James Benford, executive director and chief data officer at the Bank of England; Kshitija Joshi, Ph.D., vice president of data and AI solutions at Nomura International; Kerstin Mathias, policy and innovation director at the City of London; and Ed Towers, head of advanced analytics and data science units at the Financial Conduct Authority.
The discussion began with insights into the current AI initiatives within each organization. Towers cited a recent FCA survey showing that 75% of financial firms are using some form of AI, with 17% employing generative AI. Most of these applications are concentrated in lower-materiality areas, particularly financial crime prevention and back-office operations.
Benford shared that the Bank of England has maintained an advanced analytics division for approximately a decade, working on around 100 projects that primarily use traditional AI. Examples included applying machine learning to policy formulation and analyzing the impact of unemployment on inflation. He noted that an AI task force was established last year to broaden the use of generative AI, with a particular focus on modernizing legacy code.
Joshi highlighted her role as the first hire in Nomura International’s centralized data science team, which was set up to oversee data governance and management. She pointed out that effective AI relies on high-quality underlying data, which is often lacking in the financial sector, and that this raises challenges in testing for issues such as toxicity, bias, and hallucinations at scale.
Joshi observed a marked shift in AI adoption before and after the introduction of ChatGPT. Prior to this, stakeholders were resistant to adopting data analytics, whereas post-ChatGPT, there has been a surge in interest in AI applications.
The conversation then shifted to education, with Joshi discussing how her team prioritized training at Nomura International. Mathias echoed this sentiment, stating that training is also a top concern for the City of London, which is committed to maintaining London’s status as a leading financial center through AI advancements. She said their focus falls into three areas: internal policies, investment, and skills. There has been a dramatic increase in job advertisements seeking generative and conversational AI skills, underscoring the need to upskill and reskill existing workers alongside improving data and legacy systems.
On mitigating risks in AI models, Benford stressed the importance of establishing robust model and risk frameworks. He discussed the Bank’s approach of evaluating internal policies and focusing resources on achievable improvements, noting the significance of sourcing documentation as a critical safeguard. Because a model’s context can evolve and its behavior over time is unpredictable, he highlighted the necessity of stress testing.
Lastly, Towers underscored the vital role of collaboration between regulators and the industry, pointing to the FCA’s efforts to engage with firms on this front. He shared that the FCA published an AI update last year to clarify how existing policies, such as the Consumer Duty, apply to AI, emphasizing that now is the time for meaningful cooperation between financial institutions and regulatory bodies.