Collaborative Effort by Industry Giants and Leaders Establishes U.S. AI Safety Consortium

More than 200 companies and influential figures have come together to establish the U.S. AI Safety Institute Consortium (AISIC), aiming to safeguard the responsible development and deployment of artificial intelligence.

Formed four months after President Biden's executive order directed its creation, the AISIC counts prominent players such as Amazon, Google, Apple, Anthropic, Microsoft, OpenAI, and NVIDIA among its active participants.

The consortium, encompassing AI developers, academics, government and industry researchers, civil society organizations, and users, is dedicated to advancing the cause of “the development and deployment of safe and trustworthy artificial intelligence.”

Commerce Secretary Gina Raimondo emphasized that the AISIC is a direct outcome of President Biden’s October Executive Order, which outlined the need for guidelines concerning AI model evaluation, risk management, safety, security, and the application of watermarks to AI-generated content.

Raimondo stated, “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

The consortium extends its reach to representatives from various sectors, including healthcare, academia, labor unions, and banking. Key participants include JP Morgan, Citigroup, Carnegie Mellon University, Bank of America, Ohio State University, and the Georgia Tech Research Institute, alongside state and local government representatives.

With a global perspective, the AISIC plans to collaborate with international partners, forming the “largest collection of test and evaluation teams established to date.” The focus is on building the foundations for a new measurement science in AI safety, fostering interoperable and effective tools globally.

Notably absent from the consortium are top tech companies Tesla, Oracle, and Broadcom. TSMC is also not listed, though as a non-U.S.-based company its absence is less surprising.

The proliferation of generative AI tools in mainstream use has led to numerous instances of misuse, particularly in the form of AI-generated deepfakes. Recognizing the threat, the U.S. Federal Communications Commission declared AI-generated robocalls using deepfake voices illegal.

The surge in such AI-related challenges prompted a meeting between the Biden Administration and key AI and tech companies, many of which are now part of the AISIC. Last year, OpenAI, Google, Microsoft, Nvidia, Anthropic, Hugging Face, IBM, Stability AI, Amazon, Meta, and Inflection pledged to develop AI responsibly.

Kent Walker, Google’s President of Global Affairs, emphasized the collaborative nature of responsible AI development, stating, “None of us can get AI right on our own. We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices.”
