Several companies have added their names to the list of AI developers committed to safety per the White House’s requests. According to The Verge’s Emilia David, this is the second such agreement.
The companies that agreed, to the best of their abilities, to create “safe, secure, and trustworthy AI” are as follows:
- Adobe
- IBM
- Nvidia
- Cohere
- Palantir
- Salesforce
- Scale AI
- Stability AI
- Meta
- OpenAI
David also notes that this is a voluntary decision on the part of the companies, serving as more of a statement of good faith than a binding agreement; as such, no fines or penalties will be levied against the companies for failing to meet the terms of the accords.
The stipulations in these accords lay out a road map for development and testing to ensure that AI is both created and implemented responsibly. “In a press release, the White House said the companies have agreed to create internal and external testing of AI systems before commercial release, invest in safeguards to protect model weights, and share information to manage risks with governments, civil society, and academia,” writes David.
The agreement also reportedly includes additional safety measures, such as third-party reporting and assessment of AI tools, watermark-based identification of AI-generated content, transparency regarding risks and solutions, and an ongoing commitment to identifying and mitigating “societal risks” in particular.
These accords arrive at a high point in AI development, one that has so far been accompanied by surprisingly little legislation governing the use and spread of AI products. While the Biden administration has worked across international borders to establish guidelines and restrictions for AI, progress is reportedly slow. The commitment from the companies listed above is a strong step in the right direction.
Similarly, the Biden administration is responsible for creating an “AI Bill of Rights” blueprint, which, among myriad concerns, lists stipulations for data privacy and the option to speak with a human when AI is insufficient.
As AI tools and content become even more commonplace than they already are, bipartisan support will be crucial in ensuring that this blueprint’s criteria are met.
This story was first published in February 2024.
