As policymakers around the world, including European Union lawmakers, decide how to regulate artificial intelligence (AI), some Fortune 500 companies have begun identifying AI regulation as a potential business risk. Prominent figures in the AI industry, such as Sam Altman of OpenAI, have advocated for AI regulation. However, the uncertainty surrounding future laws is a growing concern for these major corporations.
In their annual filings, companies have cited compliance costs and the penalties associated with potential breaches as key risks. As tech leaders debate appropriate regulations, legal departments within these companies are highlighting the challenges posed by the nascent and inconsistent rules. CEOs from major AI organizations, including DeepMind and OpenAI, have called for varying degrees of regulatory oversight to prevent misuse of the technology.
A recent analysis by Arize AI, a startup that assists companies in managing generative AI systems, found that 137 of the Fortune 500 companies — approximately 27% — identified AI regulation as a significant business risk in their annual reports filed with the Securities and Exchange Commission as of May 1. The number of companies citing AI as a risk in these reports increased nearly 500% between 2022 and 2024, based on Arize’s data. In these reports, companies expressed concerns about the potential cost implications of new laws, penalties for non-compliance, and regulations that could hinder AI development.
Fortune 500 concerns over AI regulation
Notably, they are not necessarily opposing AI laws; rather, they are alarmed by the current lack of clarity regarding these laws, their enforcement, and global consistency. For example, California’s legislature has passed a state-level AI bill, but it remains uncertain whether Governor Gavin Newsom will sign it into law and whether other states will follow suit.
Jason Lopatecki, CEO of Arize AI, remarked, “The uncertainty created by an evolving regulatory landscape clearly presents real risks and compliance costs for businesses that rely on AI systems for everything from reducing credit card fraud to improving patient care or customer service calls.”
Companies’ annual reports inform investors about a range of potential business risks, and AI regulation now features among them. Meta, for example, mentioned AI-related risks 11 times in its 2022 report and 39 times in its 2023 report. The tech giant devoted an entire page to the risks associated with its AI initiatives, acknowledging the unpredictability of future regulation.
Motorola Solutions noted that compliance with AI regulations could be “onerous and expensive,” and possibly inconsistent across jurisdictions, further complicating compliance and raising liability risks. NetApp, a data infrastructure company, recognized the importance of using AI responsibly but conceded that preemptively addressing issues could prove difficult. George Kurian, CEO of NetApp, acknowledged the need for both industry self-governance and formal regulation, emphasizing that well-focused regulation could be beneficial.
“If regulation is focused on enabling the confident use of AI, it can be a boon,” he said. As AI continues to evolve rapidly, companies and regulators alike must navigate the complex landscape to strike a balance between innovation and safeguarding against misuse and unforeseen consequences.