
Artificial intelligence (AI) might be galloping away at breakneck speed, but authorities can still rein in the techno-beast, the chair of the Australian financial regulator told an industry gathering last week.
Speaking at an event hosted by the University of Technology Sydney (UTS), Joe Longo, Australian Securities and Investments Commission (ASIC) chair, said that while “generative AI technology is well advanced – and its development continues apace – we should not cede to defeatist notions that the horse has bolted”.
“Government and regulators can – and must – have a hand in shaping how AI technology is designed and deployed. It needs to accord with the values and rights on which our social stability and individual liberties depend,” Longo said.
He said overblown fears that AI might develop sentience, play out other sci-fi scenarios or simply prove too complex for humans to understand must be discounted in favour of a rational approach to governance.
“Like all technology, AI is the product of human ingenuity and can therefore, by definition, be understood,” Longo said. “Moreover, it is the job of government and regulators to ensure that these systems are explainable and transparent.”
While dismissing the prospect of an AI-led replacement of humanity, the ASIC chair said the technology nonetheless presents more prosaic risks.
“One such risk that ASIC is concerned with, for example, is the use of AI to super-charge scams at scale. This is an issue that has sometimes been overshadowed by speculation about the more apocalyptic outcomes that have captured so much of the conversation around AI,” he said. “But, at an individual level, the consequences could be devastating.”
Longo said there was a growing consensus that authorities will have to fence in the sector while leaving enough room for AI to roam.
“… we need a strong regulatory framework to steer the course of AI towards its safe and responsible development and use,” he said.
Governments and regulators across the world – including in New Zealand (NZ) – are considering ways to manage the risks posed by AI while harvesting its potential benefits.
In April this year, Daniel Trinder, Financial Markets Authority (FMA) executive director of strategy and design, said the regulator expected NZ firms to manage AI risks under their current governance and operational arrangements.
“Over time we will assess whether our regulatory framework needs strengthening to support better deployment of Gen AI,” Trinder said at the time. “We welcome ongoing dialogue with all firms on their experiences of using Gen AI, and similar technologies, to enhance our understanding of the benefits and understand if there are risks materialising to consumers through the incorrect adoption of Gen AI.”
Last week the Council of the European Union approved world-leading new legislation, the AI Act, which will regulate AI under a tiered-risk model.
Mathieu Michel, Belgian secretary of state for digitisation, administrative simplification, privacy protection, and the building regulation – whose official title could do with some administrative simplification – said in a release: “This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies.
“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”
The AI Act will take full effect two years after the EU completes the paperwork – a process likely to take a few more weeks.