SEC Warning: AI and the Inevitable Financial Crisis
The U.S. Securities and Exchange Commission (SEC) has raised a new and pressing concern about artificial intelligence, one that goes beyond the familiar debates over job displacement, threats to human creativity, intellectual property violations, and existential risk to humanity.
The SEC’s chairman has warned that advances in AI could precipitate a financial crisis, describing such an event as “nearly unavoidable.” The warning adds yet another dimension to the already complex task of understanding and regulating artificial intelligence, underscoring how far-reaching AI’s impact on global stability and security could be.
Artificial intelligence could cause a financial crisis
Gary Gensler, Chairman of the U.S. Securities and Exchange Commission, told the Financial Times that he is concerned about the growing threat artificial intelligence poses to the financial sector. He warned that increasing reliance on AI systems could lead to a collapse of the financial markets within the next decade.
Gensler’s remarks highlight the risks created when many financial institutions rely on the same uniform AI tools for market monitoring, account automation, and decision-making.
He advocates regulatory measures that oversee not only the advanced AI models themselves but also the financial institutions that deploy them, while acknowledging the complexity of implementing such rules.
He points out that the current regulatory framework focuses primarily on individual entities, such as banks, brokers, or funds, rather than on the systemic risk that arises when multiple institutions rely heavily on the same AI model or data source. That scenario illustrates a broader, systemic challenge in regulating the integration of AI into the financial sector.
Regulatory frameworks are not ready
While AI firms have largely favored self-regulation to manage the risks of their technologies, governments worldwide are increasingly pushing for more rigorous oversight.
The European Union is actively working on its Artificial Intelligence Act, which may require developers of generative AI tools to undergo a review process before public release. Similarly, the United States is surveying the AI landscape to identify areas that may need regulation. Despite these efforts, neither the EU nor the US has yet established a concrete regulatory framework.
Major financial institutions such as Morgan Stanley and JPMorgan have embraced AI models to support investors and advisors. In contrast, firms such as Goldman Sachs, Deutsche Bank, and Bank of America took a more cautious approach earlier this year, banning their employees from using ChatGPT.
The interaction between technology and finance has a history of disruption. A telling example is the 2010 “flash crash,” in which nearly $1 trillion in market value momentarily vanished; a British trader operating from his family home in London was later accused of contributing to it by flooding the Chicago Mercantile Exchange with manipulative orders. Given the advances in generative AI, future disturbances could unfold on an even larger scale.