Collaborative Effort by Industry Giants and Leaders Establishes U.S. AI Safety Consortium

More than 200 companies and influential figures have come together to establish the U.S. AI Safety Institute Consortium (AISIC), aiming to safeguard the responsible development and deployment of artificial intelligence.

Formed four months after President Biden’s executive order directing its creation, the AISIC counts prominent players such as Amazon, Google, Apple, Anthropic, Microsoft, OpenAI, and NVIDIA among its active participants.

The consortium, encompassing AI developers, academics, government and industry researchers, civil society organizations, and users, is dedicated to advancing the cause of “the development and deployment of safe and trustworthy artificial intelligence.”

Commerce Secretary Gina Raimondo emphasized that the AISIC is a direct outcome of President Biden’s October Executive Order, which outlined the need for guidelines concerning AI model evaluation, risk management, safety, security, and the application of watermarks to AI-generated content.

Raimondo stated, “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

The consortium extends its reach to representatives from various sectors, including healthcare, academia, worker unions, and banking. Key participants include JP Morgan, Citigroup, Carnegie Mellon University, Bank of America, Ohio State University, and the Georgia Tech Research Institute, alongside state and local government representatives.

With a global perspective, the AISIC plans to collaborate with international partners, forming the “largest collection of test and evaluation teams established to date.” The focus is on building the foundations for a new measurement science in AI safety, fostering interoperable and effective tools globally.

Notably absent from the consortium are top tech companies Tesla, Oracle, and Broadcom; TSMC, a non-U.S.-based company, is also not listed.

The proliferation of generative AI tools in mainstream use has led to numerous instances of misuse, particularly in the form of AI-generated deepfakes. Recognizing the threat, the U.S. Federal Communications Commission declared AI-generated robocalls using deepfake voices illegal.

The surge in such AI-related challenges prompted a meeting between the Biden Administration and key AI and tech companies, many of which are now part of the AISIC. Last year, OpenAI, Google, Microsoft, Nvidia, Anthropic, Hugging Face, IBM, Stability AI, Amazon, Meta, and Inflection pledged to develop AI responsibly.

Kent Walker, Google’s President of Global Affairs, emphasized the collaborative nature of responsible AI development, stating, “None of us can get AI right on our own. We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices.”
