Nvidia, IBM, Salesforce agree to rules to prevent AI harms: White House

The White House on Tuesday announced that a number of firms, including Nvidia (NVDA), IBM (IBM), and Salesforce (CRM), are joining the Biden administration's voluntary rules designed to limit the risks of artificial intelligence. The commitments fall into three categories: ensuring AI products are safe before introducing them to the public, putting security first, and earning the public's trust in the technology.

Adobe (ADBE), Cohere, Palantir (PLTR), Scale AI, and Stability AI round out the list of eight companies entering into the agreement. Amazon (AMZN), Anthropic, Google (GOOG, GOOGL), Inflection, Meta (META), Microsoft (MSFT), and OpenAI previously volunteered to join the administration's efforts.

Nvidia is one of a number of companies that have agreed to follow a series of rules governing AI. (AP Photo/Jeff Chiu)

"These commitments represent an important bridge to government action, and are just one part of the Biden-Harris Administration’s comprehensive approach to seizing the promise and managing the risks of AI," the White House said in a statement. "The Administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development."

Under the terms of the agreement, the companies will allow internal and external security testing of their AI systems before release, including testing of how their technologies affect society, biosecurity, and cybersecurity. They also commit to sharing best practices on safety.

The firms also say they'll invest in cybersecurity and other safeguards to protect against leaks and hacks of their unreleased model weights. They'll also allow third parties to discover and report security vulnerabilities in their systems.

On the public trust front, the companies will help users identify when content is generated by AI through technologies such as watermarking. In August, Google announced its own watermarking tech, SynthID, which embeds markers directly into images created by its Imagen text-to-image generator.
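To make "embedding markers directly into images" concrete, here is a minimal Python sketch of one classic invisible-watermarking approach, least-significant-bit (LSB) embedding. Google has not published SynthID's internals, so this is a generic illustration of the idea rather than its actual algorithm, and the TAG string and helper names here are hypothetical.

```python
# A minimal, illustrative sketch of invisible image watermarking via
# least-significant-bit (LSB) embedding. This is NOT Google's SynthID
# algorithm (which is proprietary and designed to survive edits); it
# only shows the general idea of hiding a provenance tag in pixel data.

import numpy as np
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical provenance label


def embed_tag(image: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide `tag` in the least-significant bits of the red channel."""
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels[..., 0].flatten()
    if bits.size > flat.size:
        raise ValueError("Image too small to hold the tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    pixels[..., 0] = flat.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)


def extract_tag(image: Image.Image, length: int = len(TAG)) -> str:
    """Read back `length` bytes of tag from the red channel's LSBs."""
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    bits = pixels[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8", errors="replace")


if __name__ == "__main__":
    # Stand-in for an AI-generated image.
    img = Image.new("RGB", (64, 64), color=(120, 90, 200))
    marked = embed_tag(img)
    print(extract_tag(marked))  # -> "AI-GENERATED"
```

Note that naive LSB embedding is destroyed by compression or resizing; Google says SynthID, by contrast, is designed to remain detectable after common edits such as filtering and compression.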

The tech firms further say they'll research the potential societal risks of AI and put efforts toward addressing problems ranging from climate change to cancer research.

The AI industry has exploded in popularity thanks to OpenAI, which released its generative AI-powered ChatGPT bot in November 2022. Microsoft, which is investing billions in OpenAI, rolled out AI-powered versions of its Bing search engine and Edge browser in February. Google parent Alphabet has also released its Bard chatbot and is working on an experimental version of its search engine that uses generative AI.

But the technology’s growth and pace of innovation have also spurred fears that AI could be used to do everything from spreading disinformation to taking away jobs from workers across various industries.

According to a poll conducted by the Pew Research Center, 52% of Americans surveyed said they were more concerned than excited about the use of AI in their daily lives.

Daniel Howley is the tech editor at Yahoo Finance. He's been covering the tech industry since 2011. You can follow him on Twitter @DanielHowley.
