A long-running Senate working group has issued its policy recommendation for federal AI funding: $32 billion annually, covering everything from infrastructure to grand challenges to national security risk assessments.
This “roadmap” is not a bill or a detailed policy proposal, but it gives a sense of the scale lawmakers and “stakeholders” are contemplating when they eventually get down to the real thing, though the likelihood of that happening during an election year is vanishingly small.
In a final report released by the office of Sen. Chuck Schumer (D-N.Y.), the bipartisan task force identifies the most important areas of investment to keep the United States competitive against rivals abroad.
Here are some highlights from the roadmap:
- “An intergovernmental AI research and development effort, including relevant infrastructure,” which means getting the DOE, NSF, NIST, NASA, Commerce, and a half-dozen other agencies and departments to format and share data in AI-friendly ways. In some ways this seemingly simple task is the most daunting of all and will likely take years to complete.
- Fund American AI hardware and software work at the semiconductor and architectural level, both through the CHIPS Act and elsewhere.
- Fund and further expand the National AI Research Resource, which is still in its infancy.
- “AI Grand Challenges” to stimulate innovation through competition in “AI applications that would fundamentally transform the process of science, engineering or medicine, and in fundamental issues in the design of safe, secure and efficient software and hardware.”
- “Support AI and cybersecurity preparedness” around elections, particularly to “mitigate AI-generated content that is factually false, while continuing to protect First Amendment rights.” Probably harder than it looks!
- “Modernize the federal government and improve the delivery of government services” by “upgrading IT infrastructure to utilize modern data science and artificial intelligence technologies and deploying new technologies to find inefficiencies in the U.S. code, federal rules and procurement programs.” I understand what they’re getting at here, but that’s a lot to ask of an AI program.
- Lots of vague but important defense-related stuff, like “AI-enhanced chemical, biological, radiological, and nuclear (CBRN) threat assessment and mitigation by DOD, Department of Homeland Security (DHS), DOE, and other relevant agencies.”
- Examine the “regulatory gap” in finance and housing, where AI-driven processes can be used to further marginalize vulnerable groups.
- “Review whether other potential uses of AI should be extremely limited or prohibited.” This comes after a section on potentially harmful applications like AI-powered social scoring.
- Legislation prohibiting AI-generated child sexual abuse material and other non-consensual images and media.
- Ensure the NIH, HHS, and FDA have the tools necessary to evaluate AI tools in medical and healthcare applications.
- “Establish a consistent approach to public-facing transparency requirements for AI systems,” private and public.
- Improve the general availability of “content provenance information,” i.e. training data: What was used to make a model? Is that model being used to train others? And so on. AI makers will fight this tooth and nail until they can sufficiently sanitize the ill-gotten troves of data they used to create today’s AIs.
- Look at the risks and benefits of using proprietary vs. open source AI (if the latter ever exists in a form that can scale).
You can read the full report here; there’s plenty more where the above came from (a longer list than I anticipated). No budget figures are suggested.
Since the next six months will be spent mostly on election-related mumbo-jumbo, this document serves more to put a lot of general ideas into play than to push actual legislation. Much of what is proposed would require months, if not years, of research and iteration before arriving at a law or standard.
The AI industry moves faster than the rest of the tech sector, which means it outpaces the federal government by several orders of magnitude. Although the priorities listed above are mostly prudent, one wonders how many of them will still be relevant by the time Congress or the White House actually takes action.