MATT LASLO
WHEN IT COMES to artificial intelligence, United States senators are looking to the titans of Silicon Valley to fix a Senate problem—a problem today’s political class perpetuates daily with its increasingly hyper-partisan ways, which generative AI now feeds off as it helps rewrite our collective future.
Today, the Senate is hosting a first-of-its-kind, closed-door AI forum led by the likes of Elon Musk, Mark Zuckerberg, Bill Gates, and more than 17 others, including ethicists and academics. Even though they’ll be on the senators' turf, for roughly six hours, they’ll get microphones while the nation’s elected leaders get muzzled.
“All Senators are encouraged to attend to listen to this important discussion, but please note the format will not afford Senators the opportunity to provide remarks or to ask questions to the speakers,” a notice from majority leader Chuck Schumer reads.
There’s a problem, though: Schumer’s facilitating the wrong conversation. As generative AI is poised to flood the internet with more—and more convincing—disinformation and misinformation, many AI experts say the top goal of the Senate should be restoring faith in, well, the Senate itself.
“The government is, in my opinion, based on a belief in process over a result—that if the process is equitable, we'll live with the results, whether you agree with it or not,” says Dan Mintz, chair of the Department of Information Technology at the University of Maryland Global Campus. “But people now don't believe in the process, and they don't believe in the result.”
Facts are increasingly a quaint notion fading in our collective rearview. In the past few elections, the truth we want—whether myth, reality, or a hodgepodge of the two—has been a couple of clicks away in the recesses of the web. Now generative AI makes it easy for politicians to create believable fictions that appeal to our basest biases, long before the same technology can be deployed to expose those fakes in our social media feeds.
But many politicians haven’t gotten the post-fact memo, which is why most lawmakers are praising Google’s recent announcement that it will require disclosure of “synthetic,” AI-generated content in political ads.
“It’s a real concern. We have to have a way for folks to easily verify that what they're seeing is reality,” says Michigan senator Gary Peters, head of the Democratic Senatorial Campaign Committee.
But can new technology do what today’s political leaders have failed to do and restore faith in the American political system? Doubtful. Americans—with unseen assistance from the algorithms that now run our digital lives—increasingly live in different political universes. Some 69 percent of Republicans now believe US president Joe Biden did not legitimately win the 2020 election, while upwards of 90 percent of the GOP thinks news outlets intentionally publish lies. On the other side, 85 percent of Democrats think former president Donald Trump is guilty of interfering with the 2020 election.
“We now truly believe that facts are malleable, and so the ability to move people is becoming more difficult. So I think the big problem of deepfakes is not that it's going to have this direct impact on the election, it is that it's going to have an even greater contribution to decreasing the faith of people in institutions,” Mintz says.
Congress could force all tech companies to watermark AI-generated content, an idea many on Capitol Hill support, but that would amount to window dressing in today’s political climate.
“Honestly, I don't think that that is going to solve the problem,” says Chinmayi Arun, executive director of the Information Society Project and a research scholar at Yale Law School. “It’s a rebuilding of trust, but the new technologies also make a disruptive version of this possible. And that's also kind of why maybe it's necessary to label them so that people know that.”
At least one senator seems to agree. Senator J. D. Vance, an Ohio Republican, says it may be a good thing for all of us to mistrust what we see online. “I'm actually pretty optimistic that over the long term, what it's going to do is just make people disbelieve everything they see on the internet, but I think in the interim, it actually could cause some real disruptions,” Vance says.
In 2016 and 2020, misinformation and disinformation became synonymous with American politics, but we’ve now entered a deepfake era marked by the democratization of the tools of deception, subtle as they may be, with a realistic voice-over here or a precisely polished fake photo there.
Generative AI doesn’t just make it easy to remake the world into one’s political fantasies; its power also lies in its ability to precisely transmit those fakes to the most ideologically vulnerable communities, where they have the greatest ability to spark a raging e-fire. Vance doesn’t see one legislative fix for these complex and intertwined issues.
“There's probably, on the margins, things that you can do to help, but I don't think that you can really control these viral things until there's just a generalized level of skepticism, which I do think we'll get there,” Vance says.
“Scripted” Political Theater
Over the summer, Schumer and a bipartisan group of senators led three private all-Senate AI briefings, which have now dovetailed into these new tech forums.
The briefings are a change for a chamber filled with 100 camera-loving politicians who are known for talking. During normal committee hearings, senators have become experts at raising money—and sometimes gaining knowledge—off asking made-for-YouTube questions, but not this time. While they won’t be able to question the assembled tech experts this week, Schumer and the other hosts will be playing puppet masters off stage.
“It’s intended to be a guided conversation. It’s scripted questions, and those questions are all designed to elicit a myriad of different thoughts on a range of policy areas for the benefit of staffers and members alike,” says senator Todd Young, an Indiana Republican.
Young is part of Schumer’s bipartisan group of four senators—along with senators Martin Heinrich, a New Mexico Democrat, and Mike Rounds, a South Dakota Republican—who’ve been spearheading these private Senate AI study sessions.
While there’s no official timeline, Young doesn’t expect the Senate AI forums to be wrapped up until this winter or early next spring.
The sessions may be bipartisan, but the two parties remain worlds apart when it comes to potential policy. True to form, Democrats are calling for new regulations while Republicans are tapping the brakes on the idea.
“In most of these areas, you already have existing statutes that prohibit the behaviors that we want to continue to be prohibited,” Young says. “So the policy challenge then becomes to ensure that within government, our existing regulatory and enforcement mechanisms are attuned to an AI-enabled world.”
While many Democrats are calling for a new AI agency, the votes for that are unlikely to materialize among Republicans, making it increasingly likely that presidents will instead install an “AI czar” inside their administrations, a post that skips the formal nomination process requiring Senate approval.
“I think you'll probably need someone to coordinate policymaking activities across different agencies of government that will probably be located within the White House, [which] could be analogous to a national security adviser,” Young says.
National security advisers don’t require Senate confirmation, which is how former president Barack Obama was able to have Susan Rice in his White House even after she became the Republicans’ favored political piñata. It’s also how Trump was able to get conspiracy-peddling Michael Flynn into his White House, if only for 22 days before Flynn was forced out for lying.
Other senators are also looking for ways to bypass the narrowly divided Senate, to say nothing of the always-warring, GOP-controlled House of Representatives.
“One of the things we could do is make clear that the FEC [Federal Election Commission] has jurisdiction to take this up and look at it,” says Heinrich. “I think they probably do, but I'm not sure that view is held by all of the members. So we should make that eminently clear.”
While the two parties are moving further apart the more they study artificial intelligence, some are looking for ways to combine both parties’ traditional concerns into one all-encompassing argument for action.
“I think that the chances improve dramatically if you can build an alliance between those who want to protect elections and those who want to protect your confidence in the public markets—suddenly you have strange bedfellows coming together,” says Senator Mark Warner, a Virginia Democrat who chairs the Intelligence Committee.
Warner spent decades in tech, cofounding the business that evolved into Nextel, before overseeing the past few elections from his perch as Intelligence Committee chair, where he saw foreign intrusion firsthand. While he applauds Google’s first step toward protecting the public against inflammatory AI-generated nonsense, he says it falls woefully short.
“What I worry about is if it's individual—platform by platform—making their own decisions about what falls in and falls out. We've seen that in the past,” Warner says. “That doesn't work.”
It may not have worked in the past, but that doesn’t mean Congress did anything about it. That’s how Twitter (now X) went from banning political advertisements in the 2022 midterms to announcing it will allow political ads in 2024. Other platforms also change their policies at will.
Follow the Money
While the millionaires and billionaires Schumer is assembling are flush with cash, whether their own or their investors’, the government isn’t. Or, at the very least, lawmakers haven’t earmarked billions for this emerging generative AI field to try to counter the private sector.
“We have seen very little investment in this direction. So just compare that with how much money OpenAI is making, how many investments they have attracted—compared to, you know, that meager amount of staff at Darpa [Defense Advanced Research Projects Agency] or financial support for this research,” says Siwei Lyu, a SUNY Empire Innovation professor in the Department of Computer Science and Engineering at the University at Buffalo.
“That's a huge astronomical imbalance in those numbers, so we need the government to pay more attention and invest in counter-technologies to this,” Lyu says.
While Lyu and other academics have called for investments in these counter-technologies for years, Congress dithered. And now Schumer is giving the monied CEOs his chamber’s microphones. Lyu has been in media forensics for two decades and has seen this before.
“That's the classical clash between capitalism—making money, making it profitable—against social goods,” Lyu says. “Everything needs the government's more active involvement in this process.”
The Senate was once a digitally daft chamber; today, after a summer of studying AI, most senators feel savvy enough on the topic to have a few earfuls of complaints for the giants of Silicon Valley. But this week, senators known for their excruciating ability to fill dead air with the sound of their own voices will once again be required to sit and listen to an artificial discussion on artificial intelligence.
But when they do speak, generative AI will be listening. It will then recreate our real world in their hyper-partisan image, and that’s a problem party leaders aren’t addressing. Because as of now, AI may be disrupting a lot, but it's yet to make a disruptive dent in the politics of business-as-usual in Washington.