Industry leaders in generative artificial intelligence (AI) are urging the government to regulate the technology, a rare stance for an industry at the start of a technological revolution. Testifying before lawmakers, OpenAI CEO Sam Altman emphasized the need for rules to limit the potential dangers of generative AI. Altman proposed establishing a licensing agency for AI efforts above a certain capability threshold, testing potentially risky AI models before deployment, and requiring independent audits. Joined by IBM’s chief privacy and trust officer, Christina Montgomery, and NYU professor Gary Marcus, Altman presented these regulatory suggestions before a Senate Judiciary subcommittee.
The witnesses unanimously agreed on the importance of international bodies in setting standards and monitoring AI. Altman suggested using the International Atomic Energy Agency as a model for such an organization. However, despite the industry’s calls for regulation, AI legislation may face significant hurdles, given the historical difficulties in passing bipartisan bills on tech policy and the absence of a national privacy law.
Lawmakers expressed concerns about various issues related to generative AI, including election misinformation, job disruption, weaknesses in non-English languages, copyright problems, and the presence of dangerous and harmful content. The failure to regulate social media early on was cited as a cautionary example, highlighting the need for proactive measures with AI.
While some within the industry fear that early AI regulations could entrench current industry leaders, critics argue that consultation must extend beyond the industry itself. Further hearings on AI are scheduled, including one in July focused on copyright and patents. The general consensus among industry leaders and experts is that thoughtful regulation is necessary for responsible AI development and deployment.