The Australian Securities and Investments Commission (ASIC) is urging financial services and credit licensees to strengthen their governance practices in light of the increasing adoption of artificial intelligence (AI).
This recommendation follows ASIC’s first state-of-the-market review, which examined the use and implementation of AI by 23 licensees. The review found that, although current use of AI remains relatively cautious, a gap may be emerging between licensees’ governance frameworks and the pace of AI adoption.
ASIC Chair Joe Longo emphasized the importance of updating governance frameworks to address future challenges posed by AI technology. “Our review shows that licensees have primarily used AI to support human decision-making and improve efficiencies. However, the adoption rate is rapidly increasing, with around 60% of licensees planning to expand their AI usage, potentially altering its impact on consumers,” he stated.
The findings revealed that nearly half of the licensees lacked policies that addressed consumer fairness or bias, and an even smaller percentage had established policies regarding the disclosure of AI utilization to consumers. Longo remarked, “It is evident that prompt action is required to ensure governance is adequate for the anticipated surge in consumer-facing AI. Without proper governance, there is a risk of misinformation, unintended discrimination or bias, manipulation of consumer sentiment, and failures in data security and privacy, all of which could harm consumers and damage market confidence.”
Longo urged licensees to consider their existing obligations regarding consumer protection in relation to AI deployment, rather than waiting for the introduction of specific AI laws and regulations. “In cases of misconduct, ASIC will pursue enforcement actions when appropriate and necessary,” he added.
This governance gap is not limited to Australia. A survey of 200 US compliance professionals conducted by ACA Aponix and the National Society of Compliance Professionals found that only 32% have established an AI committee or governance group. Furthermore, only 12% of those utilizing AI have adopted an AI risk management framework, and merely 18% have implemented formal testing programs for AI tools.
Most respondents (92%) have not yet adopted policies or procedures to govern the use of AI by third parties or service providers, leaving organizations exposed to cybersecurity, privacy, and operational risks within their third-party networks.
“We’re seeing widespread interest in using AI across the financial sector, yet there’s a clear disconnect when it comes to establishing the necessary safeguards,” said Lisa Crossley, Executive Director of the NSCP. “Our survey indicates that while many firms recognize the potential of AI, they lack the frameworks to manage it responsibly. This gap not only exposes firms to regulatory scrutiny but also highlights the need for robust AI governance protocols as usage continues to grow.”
For those interested in exploring the evolving challenges and opportunities AI presents in the banking sector, there’s an upcoming event—Finextra’s first NextGenAI conference on November 26, 2024.