
As state and federal governments pursue AI regulation, Google has chimed in with its own thoughts.

On Wednesday, the tech giant published a blog post titled "7 principles for getting AI regulation right." Unsurprisingly, the overall message is that AI should be regulated, but not to the extent that it hampers innovation. "We’re in the midst of a global technology race," wrote Kent Walker, president of global affairs for Google and parent company Alphabet. "And like all technology races, it's a competition that will be won not by the country that invents something first, but by the countries that deploy it best, across all sectors."

Google and AI companies like OpenAI have publicly taken a cooperative attitude towards AI regulation, citing the threat of existential risk. Google CEO Sundar Pichai participated in the Senate's AI Insight Forums to inform how Congress should legislate AI. But some in favor of a less-regulated, more open-source AI ecosystem have accused Google and others of fear-mongering in order to achieve regulatory capture.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," said Meta Chief AI Scientist Yann LeCun, referring to the the CEOs of OpenAI, Google DeepMind, and Anthropic respectively. "If your fear-mongering campaigns succeed, they will inevitablyresult in what you and I would identify as a catastrophe: a small number of companies will control AI."


Walker referenced the White House AI executive order, the AI policy roadmap proposed by the U.S. Senate, and recent AI bills in California and Connecticut. While Google says it supports these efforts, it argues that AI legislation should focus on regulating specific outcomes of AI development, not broad-strokes laws that stifle development. "Progressing American innovation requires intervention at points of actual harm, not blanket research inhibitors," said Walker, who noted in a section about "striving for alignment" that more than 600 AI bills have been proposed in the U.S. alone.

The Google post also briefly touched on the issue of copyright infringement and how and what data is used to train AI models. Companies with AI models argue that using publicly available data from the web constitutes fair use, but media companies, and more recently major record labels, have accused them of violating copyright and profiting from it.

Walker essentially reaffirms the fair-use argument, but acknowledges that there should be more transparency and control over AI training data, saying "website owners should be able to use machine-readable tools to opt out of having content on their sites used for AI training."
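A version of the machine-readable opt-out Walker describes already exists: Google publishes a "Google-Extended" robots.txt token that site owners can disallow to keep their content out of Gemini and Vertex AI training, and OpenAI publishes a similar "GPTBot" token. As an illustrative sketch, a site's robots.txt opting out of both might look like this:

    # Opt out of Google's AI-training crawler token (Gemini / Vertex AI)
    User-agent: Google-Extended
    Disallow: /

    # Opt out of OpenAI's GPTBot crawler
    User-agent: GPTBot
    Disallow: /

These tokens only cover the publishers that honor them, which is part of why Walker's framing treats broader, standardized opt-out tooling as a goal rather than a solved problem.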

The principle about "supporting responsible innovation" covers "known risks" in general terms. But it doesn't get into specifics about, say, regulatory oversight to prevent flagrant inaccuracies in generative AI responses that could fuel misinformation and cause harms.

To be fair, no one actually took it seriously when Google's AI Overviews feature recommended putting glue on a pizza, but it's a recent example that underscores the ongoing discussion about accountability for AI-generated falsehoods and responsible deployment.

Topics: Artificial Intelligence, Google
