Biden-Harris Administration Secures Commitments from Leading AI Companies to Manage AI Risks



The Biden-Harris Administration has announced that it has secured voluntary commitments from seven leading AI companies to manage the short- and long-term risks of AI models. The companies are OpenAI, Amazon, Anthropic, Google, Inflection, Meta, and Microsoft. These commitments aim to ensure the safety of AI products and to address issues such as cybersecurity, bias, and privacy.


The commitments secured by the companies include implementing internal and external security testing of AI systems before their release, as well as sharing information on managing AI risks. They also commit to investing in cybersecurity measures to protect proprietary and unreleased model weights and to allowing third-party discovery and reporting of vulnerabilities in their AI systems.


To ensure transparency, the companies will develop systems such as watermarking to identify AI-generated content. They will also publicly report on the capabilities, limitations, and appropriate use of their AI systems. Furthermore, they commit to prioritizing research on societal AI risks, including bias and privacy protection.


In addition to managing risks, the companies also pledge to develop and deploy advanced AI systems to tackle society’s greatest challenges, such as cancer prevention and climate change mitigation.


While these commitments are voluntary and not enforceable, they are seen as an important first step in advancing AI governance. The companies acknowledge that more needs to be done to ensure the safety and trustworthiness of AI technology. OpenAI, in particular, has emphasized the significance of these safeguards in promoting effective AI governance worldwide.


Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, views these voluntary commitments as a positive first step. However, he emphasizes the need for legislation to address the wide range of risks posed by generative AI and to mandate transparency, privacy protections, and increased research.


The Biden-Harris Administration sees this announcement as part of its broader commitment to responsible and safe AI development, aiming to protect Americans from harm and discrimination. The Administration is in the process of developing an executive order and pursuing bipartisan legislation to lead the way in responsible innovation.


These voluntary commitments from the AI industry come ahead of significant Senate efforts to address AI policy and legislation. The Senate plans to hold a series of AI “Insight Forums” involving experts in various fields related to AI. These forums will contribute to the development of AI policy and consensus around legislation.


Suresh Venkatasubramanian, former White House AI policy advisor, sees value in voluntary efforts alongside legislation and regulatory measures. He believes that even voluntary actions help organizations understand the need for AI governance and put supporting structures in place. He also finds the possibility of an upcoming executive order intriguing, as it represents a concrete exercise of unilateral power by the White House.


While these commitments mark an important step in managing AI risks, ongoing efforts are necessary to ensure the safety, transparency, and ethical use of AI technology.
