Event Summary | Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress

A Critical Non-Partisan Policy Concern 

Artificial intelligence (AI) occupies a growing share of the legislative agenda. On December 3rd, the Wilson Center’s Science and Technology Innovation Program (STIP) hosted its first all-congressional-staff panel, “Assessing the AI Agenda in the 117th Congress.” This bipartisan, bicameral event brought together panelists with an array of perspectives on AI--including graduates of the Wilson Center’s Technology Labs, a series of educational seminars for Congressional (and soon, Executive) staff. Together, panelists discussed the greatest legislative achievements of this Congress and the challenges that need to be addressed next year. The result of this discussion is the start of an AI agenda for the 117th Congress.

Why Assess the AI Agenda Now?

Among the most significant legislative achievements to date is the inclusion of several critical AI bills in the 2021 National Defense Authorization Act (NDAA). This year’s NDAA includes the Artificial Intelligence Initiative Act (AIIA), first introduced in May 2019. The AIIA creates a National Artificial Intelligence Research and Development Initiative and directs the Office of Science and Technology Policy (OSTP) to establish a National Artificial Intelligence Coordination Office, among other provisions. According to Dahlia Sokolov, Staff Director of the House Science Subcommittee on Research and Technology, the AIIA is “a really, really strong piece of legislation that, unless things go sideways, should be enacted as part of the NDAA.” Also included in the NDAA is the National AI Research Resource Taskforce Act, which would develop the first National Cloud for scientists and students to access computing resources and datasets. Senate AI Caucus co-chairs introduced the bill in June, along with Reps. Anna Eshoo, Anthony Gonzalez, and Mikie Sherrill in the House.

The 2021 NDAA builds on numerous successes. In 2019, Senators Rob Portman and Martin Heinrich founded the Senate AI Caucus, which is staffed by two graduates of the Wilson Center’s Technology Labs. The initiative has offered the Senate a venue for exchanging expertise and hosting major conversations about AI governance. The 116th Congress has already seen three AI bills signed into law, and at least six more proposals may be signed before it ends. “What this means is we actually stand to get every single AI Caucus initiative signed into law by the end of this Congress…so I think we’re all ears on what we should do next,” said Sam Mulopulos, Senior Advisor to Senator Portman on technology and trade issues.

What’s Next for AI Policy?

In addition to reflecting on previous achievements, panelists discussed opportunities for future work around (1) STEM Education, broadening participation, and workforce development; (2) ethics, transparency, and accountability; and (3) competitiveness and trade.

Science, Technology, Engineering, and Math (STEM)

On STEM education, the panel argued that more federal involvement is necessary to develop a K-12 curriculum and to strengthen incentives for students to pursue STEM. Mike Richards, Deputy Chief of Staff to Rep. Pete Olson, said, “Part of what we need to do is start really looking internally about how we can make sure our children are...educated enough to tackle these jobs in the future. How do we start incentivizing these kids wanting to be and who are interested in STEM education? I think that Congress should look at those ways.”

Sokolov recognized that her position on the House Science Committee offers a unique opportunity. “We’ll certainly be looking to those tools--the scholarship and fellowship kind of tools--and be thinking harder and more deeply about this diversity issue in terms of what really works...because we’ve kind of been running on this hamster wheel on the diversity issue for decades now.”

She took a deeper dive into the diversity issue: “One of [Chairman Johnson’s] priorities is looking at rural access in rural areas and systemic education. [She and Ranking Member Lucas] have really worked as partners across that whole universe of what it means to broaden participation in STEM--specifically AI. We’ll continue to do that on the next science committee in the next Congress.”

Just as we need more diversity, we also need a larger talent pool. However, there is growing concern that AI will eliminate jobs rather than enhance or redefine existing ones. One solution is targeted efforts to strengthen the existing workforce, including by raising skills and expertise among the general public and within government.

“I think one really important thing is really increasing the armed services usage and getting a talent pool in effect for the armed services for AI,” said Sean Duggan, Military Legislative Assistant to Senator Martin Heinrich. One forthcoming policy proposal would add assessments of AI or advanced computing skills to armed forces recruitment tests. Another bill “would allow the secretary to take advantage of current authorities [to] directly hire professionals with AI and advanced computing specialties.”

Ethical, Transparent, and Accountable AI

Panelists also focused on the factors required to support safe and ethical AI--beginning with standards, and encompassing transparency, accountability, and trust.

Many speakers, beginning with Mulopulos, identified the need for foundational work, including “the development of a framework for best practices, guidelines, and voluntary consensus standards.”  Such efforts might begin with “agreeing on definitions” (Senators Portman and Heinrich have already helped write a definition for “explainable AI” in law). Sokolov suggested that the National Institute of Standards and Technology (NIST), one of the most “underappreciated and underfunded agencies in our government for what they do,” has a strong leadership role to play in this area.  

In addition to developing standards, the panel recognized a need to develop processes for measuring adherence to standards, and articulating minimum requirements or metrics around concepts like transparency and trustworthiness. Such work can help ensure the development of safe and ethical AI, and is a key strategy for mitigating policy concerns such as bias in facial recognition, and other civil rights issues. 

While laws can be used on the “back end” to address misuse, work is also needed on the “front end” to “make sure you never get the unintended consequences of bias and other ways that [AI] can cause harm,” said Sokolov. By making often opaque processes more open, transparency can help avoid unintended consequences through increased scrutiny, and create accountability and trust.

Beyond Capitol Hill, panelists agreed that work on transparency can help meet the needs of a critical stakeholder: the public at large. “I think that AI will only be as useful or as accepted as people feel comfortable that it is, you know, a part of our everyday lives. If you can't explain that to somebody on the street, then I think that a lot of its usage and future could be limited,” said Duggan. “So I think that's one big focus of the Senator and hopefully of the Senate AI caucus next year--how do we increase familiarity, trustworthiness and accountability with AI?”

Competitiveness, Multi-Sector Partnerships, and Trade

Finally, panelists discussed key issues around US competition, multi-sector partnerships, and trade.

Historically, the United States has lacked a defense strategy that includes an overarching AI strategy. This gap may have contributed to a lack of cross-sector cooperation and hampered America’s ability to lead, leaving the country at risk of falling behind competitors like China. In addition to cohesive messaging, sectors must be willing to participate in joint research, share knowledge, and work together to advance America’s leadership.

When asked how to balance US public- and private-sector leadership, Sean Duggan offered his perspective.

“One thing I’ve been concerned about from an armed services perspective is industry reactions to working more closely with the Pentagon or working closely with the Department of Defense,” said Duggan. “There’s a gap between those in Silicon Valley and Ohio and other great places doing AI, and they may have a lack of trust about where their software or their work is going to end up...I know that the Joint Artificial Intelligence Center within the Pentagon is doing a lot of hard work to try to bridge that gap.”

Panelists suggested additional interdependencies between government and other sectors. Government depends on the private sector for some access to computing power:  “The [supercomputing pattern] is not where it needs to be for us to actually be able to compete,” said Richards. “[The] federal government is having to use private computers to do a lot of our research, so we really need, at this time, to rely on the private industry.” Steps are also required to ensure researchers from all sectors can access data. “[W]hat we hear from industry, the number one, best thing we could be doing would be making good government data available,” said Mulopulos.

These discussions will be pushed to the forefront when the 2015 Trade Promotion Authority expires in July. “I think we have no reason to believe that the next administration would not want to renew the trade promotion authority,” said Mulopulos. “So, we are setting ourselves up for a must-pass vehicle to debate: What should American competitiveness look like over the next five years? Over the next generation?”

