Majority of Bank Employees Utilizing Unapproved AI Tools


The rise of shadow AI, in which bank employees use AI tools without authorization, poses a growing risk to the financial services sector, according to recent research.

A study by AI vendor DeepL found that 65% of UK finance professionals admitted to using unapproved AI tools for customer interactions, a practice that exposes firms to cybersecurity and regulatory risks. The research also indicated that 70% of respondents believe AI has made customer support faster and more accessible, and they anticipate it will play an essential role in cross-border banking.

Currently, 37% of banking interactions involve AI, with multilingual communication the most common application, followed by chatbots and transaction monitoring for fraud detection. However, the proliferation of shadow AI could impede this technological progress.

A separate study from Cybernews found that 59% of US workers use unapproved AI tools, with executives and managers the most frequent offenders. Mantas Sabeckis, a security researcher at Cybernews, noted the risks involved: “If employees use unapproved AI tools for work, there’s no way to know what kind of information is shared with them. With tools like ChatGPT feeling conversational, people often forget that their data is shared with the company behind the chatbot.”

DeepL explained that shadow IT typically arises when teams lack access to the tools they need; for instance, employees may reach for general-purpose AI when secure, purpose-built solutions are required. To counter this, firms must foster closer collaboration between customer-facing teams and IT departments when selecting technology.

David Parry-Jones, chief revenue officer at DeepL, emphasized the importance of addressing this issue: “In financial services, where every interaction is heavily regulated and reputational risk is significant, employees will look for alternatives if the tools provided do not meet their needs. The real risk lies not in staff experimenting with AI, but in companies failing to offer secure, suitable solutions.”

He advocates a collaborative approach between IT and frontline teams to curb shadow AI, guard against cybersecurity threats, and realize the full benefits of trusted AI technology.