
Global regulators are racing to stay ahead of fast-developing artificial intelligence (AI) technology as it seeps into financial markets, according to a new report, with innovative tools likely needed to combat novel risks.
The International Organization of Securities Commissions (IOSCO) study says that just four years after its first take on the tech “it is clear that the growing use of advancements in AI potentially give rise to new or increasing issues, risks, and challenges, which would need to be closely monitored by regulators”.
While the technology was already well-established in some capital markets sectors when IOSCO published its inaugural review in 2021, AI use among financial industry participants has since rocketed on the back of the revolution led by the likes of ChatGPT.
“AI technologies have become increasingly common to support decision-making processes, in applications and functions such as robo-advising, algorithmic trading, investment research, and sentiment analysis,” the latest IOSCO paper says. “Regulated firms and third-party providers are also using AI technologies to enhance surveillance and compliance functions, particularly in anti-money laundering (AML) and counter-terrorist financing (CFT) related systems.”
IOSCO says a survey conducted for the report found financial firms in its member jurisdictions were toying with multiple other AI uses such as coding, trading, research, advice and even drafting an “investment research paper by adopting the writing style of certain investment analysts”.
But as financial services firms experiment at pace to drive productivity gains via AI, ‘bad actors’ have also quickly adopted the technology for more nefarious purposes.
For example, the power of so-called ‘generative AI’ systems to create human-like digital avatars has dramatically increased the potential for relationship-based rip-offs.
“Should malicious uses of AI techniques become widespread or egregious, it could erode investor trust in the provenance and truth of digital information and communications to the point of presenting broader risks, such as undermining trust in financial markets,” the IOSCO report says. “Further, trust in the use of AI technologies overall could decline, potentially hindering AI technology development and its use for beneficial purposes.”
In such a fast-changing environment, IOSCO admits that oversight of the financial system could become increasingly fraught, even within regulators’ existing remits.
“Regulators and others have limited understanding of how AI models work, what they are capable of, and what their impacts may be,” the study says. “If regulated firms use complex and opaque datasets, models, and systems, some of which are provided by non-regulated firms, data and knowledge gaps could widen.”
However, while a “one-size-fits-all” solution to the AI policing problem seems unlikely, the peak global regulatory body says identifying best practices, risk assessments and new technological tools could keep the robots in line.
Tuang Lee Lim, IOSCO Fintech Task Force chair, said in a release: “The next phase for IOSCO will be to consider, as appropriate, the development of additional tools, recommendations, or considerations to assist members in addressing the issues, risks, and challenges posed by the use of AI technologies in financial products and services.”
Last week, IOSCO launched one new weapon in the war on financial fraud with a worldwide database of dodgy products.
The International Securities & Commodities Alerts Network (I-SCAN) is a “unique global warning system where any investor, online platform provider, bank or institution can check if a suspicious activity has been flagged for a particular company by financial regulators worldwide”.
I-SCAN compiles scam data from IOSCO member regulators, “making information available to all users and investors – anywhere in the world”, a statement says.