As artificial intelligence (AI) innovation breaks through news cycles and captures public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented wave of technology reaches its full potential as a positive contribution to economic and social progress.
The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives almost two years ago. At the time, I wrote that the AI Act, as it is known, was “an objective and measured approach to innovation and social considerations.” Today, the leaders of technology companies and the US government are coming together to chart a unified vision for responsible AI.
The power of generative AI
OpenAI’s launch of ChatGPT last year captured the imagination of technology innovators, business leaders, and the public, igniting consumer interest in and understanding of the capabilities of generative AI. However, with the pervasiveness of artificial intelligence, including as a political issue, and the propensity of humans to experiment with and test systems, the potential for misinformation, the impact on privacy, and the risks of fraud and cybersecurity breaches could quickly become an afterthought.
In an initial effort to address these challenges and protect the rights and safety of Americans, the White House has announced new actions to promote responsible AI innovation.
In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “advance responsible American innovation in artificial intelligence (AI) and protect the rights and safety of the people.” These include:
- New investments to drive responsible US R&D in AI.
- Public evaluations of existing generative AI systems.
- Policies to ensure the US government leads by example in mitigating AI risks and taking advantage of AI opportunities.
New investments
When it comes to new investment, the $140 million from the National Science Foundation to launch seven new National AI Research Institutes pales in comparison to what private companies have raised.
While directionally correct, US government investment in AI remains microscopic compared with the investments of other governments, notably China, which began making major AI investments in 2017. There is an immediate opportunity to amplify the impact of this investment through academic partnerships for workforce development and research. The government should fund AI hubs alongside the academic and corporate institutions that are already at the forefront of AI research and development, driving innovation and creating new opportunities for AI-powered companies.
Collaborations between AI centers and leading academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help bridge the gap between theory and practical application by bringing together experts from academia, industry, and government to collaborate on cutting-edge research and development projects with real-world applications. By partnering with leading companies, these centers can help businesses better integrate AI into their operations, improving efficiency, reducing costs, and delivering better consumer outcomes.
In addition, these centers help educate the next generation of AI experts by giving students access to cutting-edge technology, hands-on experience with real-world projects, and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the US government can help shape a future where AI enhances, rather than replaces, human labor. As a result, all members of society can benefit from the opportunities created by this powerful technology.
Public evaluations
Model evaluation is critical to ensuring that AI models are accurate, reliable, and free from bias, which is essential for successful deployment in real-world applications. For example, imagine an urban-planning use case in which a generative AI model is trained on data from redlined cities with historically underserved, poor populations. Unfortunately, the model will only produce more of the same. The same is true of lending bias, as more financial institutions use artificial intelligence algorithms to make lending decisions.
If these algorithms are trained on data that discriminates against certain demographic groups, they may unfairly deny loans to those groups, deepening economic and social disparities. These are just a few examples of bias in AI, but addressing it should remain a priority no matter how quickly new AI technologies and techniques are developed and deployed.
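To make the lending example concrete, one common first-pass screening check is the disparate impact ratio, which compares approval rates across demographic groups. The sketch below is a minimal illustration in Python; the data, group labels, and the 0.8 threshold (the “four-fifths rule” heuristic) are assumptions for demonstration, not part of any policy or framework discussed here.

```python
# Minimal sketch of a disparate-impact check on a lending model's decisions.
# All data and group names here are illustrative assumptions.

def approval_rate(approvals: list[int], groups: list[str], group: str) -> float:
    """Fraction of applicants in `group` whose loans were approved (1 = approved)."""
    decisions = [a for a, g in zip(approvals, groups) if g == group]
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(approvals: list[int], groups: list[str],
                           protected: str, reference: str) -> float:
    """Approval rate of the protected group divided by that of the reference group."""
    return (approval_rate(approvals, groups, protected)
            / approval_rate(approvals, groups, reference))

# Hypothetical model outputs: 1 = loan approved, 0 = denied.
approvals = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(approvals, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 for this toy data

# The "four-fifths rule" heuristic flags ratios below 0.8 as potential
# adverse impact that warrants further investigation.
if ratio < 0.8:
    print("Potential adverse impact against the protected group.")
```

A real evaluation would, of course, use far richer fairness metrics and representative data, but even a check this simple can surface the kind of lending disparity described above.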
To combat bias in AI, the administration has announced a new model evaluation opportunity at the AI Village at DEF CON 31, a forum for researchers, practitioners, and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model evaluation is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI, leveraging a platform offered by Scale AI.
In addition, it will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework. This is a positive development: the administration is engaging directly with industry and capitalizing on the expertise of the technical leaders in the space, the corporate AI labs.
Government policies
With regard to the third action, ensuring the US government leads by example in mitigating AI risks and taking advantage of AI opportunities, the Office of Management and Budget is to draft policy guidance on the US government’s use of AI systems for public comment. Again, no timeframe or details have been given for these policies, but an executive order on racial equity issued earlier this year is expected to be at the forefront.
The executive order includes a provision directing government agencies to use AI and automated systems in a way that promotes fairness. For these policies to have a significant impact, they must include incentives and repercussions; they cannot be merely optional guidance. For example, NIST security standards are effectively requirements for most government agencies. Failing to adhere to them is, to say the least, incredibly embarrassing for the people involved, and grounds for personnel action in some parts of the government. Government AI policies, whether part of NIST or otherwise, must carry comparable weight to be effective.
Furthermore, the cost of adhering to such regulations should not become an obstacle to startup-driven innovation. For example, what could be achieved with a framework in which the cost of compliance scales with the size of the company? Finally, as the government becomes a major purchaser of AI platforms and tools, it is paramount that its policies become the guiding principle for building such tools. Make compliance with this guidance a literal, or at least effective, requirement for purchase, as with the FedRAMP security standard, and these policies can move the needle.
As generative AI systems become more powerful and pervasive, it is essential that all stakeholders, including founders, operators, investors, technologists, consumers, and regulators, be thoughtful and intentional in how they pursue and engage with these technologies. While generative AI, and AI more generally, has the potential to revolutionize industries and create new opportunities, it also poses significant challenges, particularly around bias, privacy, and ethical considerations.
Therefore, all stakeholders must prioritize transparency, accountability, and collaboration to ensure that AI is developed and used responsibly and beneficially. This means investing in ethical AI research and development, engaging with diverse perspectives and communities, and establishing clear guidelines and regulations for developing and deploying these technologies.