STEVEN LEVY
AS A JOURNALIST, one thing I appreciate about Gary Marcus is that he always makes time for a chat. The last time we met face-to-face was late last year in New York City, where he fit me in between a series of press interviews, including NPR, CNN, the BBC, and the Big Kahuna, a taping of 60 Minutes with Lesley Stahl.
When I called Marcus this week for an update on his Never Ending Tour to critique AI, he made sure to Zoom with me the next day, tweaking his schedule to avoid conflict with a Morning Joe hit. It was a good day for Marcus: the New York Times Sunday Magazine had just gone online with a lengthy Marcus interview conducted by its talk maven, David Marchese, whose previous subjects have included Thomas Piketty, Tom Stoppard, and Iggy Pop.
The success of large language models like OpenAI’s ChatGPT, Google’s Bard, and a host of others has been so spectacular that it’s genuinely scary. This week President Biden summoned the lords of AI to figure out what to do about it. Even some of the people building the models, like OpenAI CEO Sam Altman, recommend some form of regulation. And the discussion is a global one; Italy even banned OpenAI’s bot for a while.
The sudden urgency about the benefits and evils of AI has made it a sufficiently hot media topic to create an instant demand for camera-ready experts—particularly those who have takes hot enough to sustain an extended sound bite. It is a moment made for Marcus, a 53-year-old entrepreneur and NYU professor emeritus who now lives in Vancouver, coincidentally the site of his recent TED talk on constraining AI. In addition to his inevitable Substack “The Road to AI We Can Trust” and his podcast Humans vs. Machines—currently number 6 on Apple’s chart for tech pods—Marcus has become one of the go-to talking heads on this breakout topic, in such demand that one applauds his restraint in not creating a Marcus-bot so he could share his AI concerns with Andrew Ross Sorkin and Anderson Cooper at the same time.
Marcus’ résumé makes him an unusual, though not unqualified, candidate as a spokes-expert in AI (though some people dispute his bona fides). For his 23 years at NYU, he was in psychology, not computer science. But he’s been fascinated with minds and machines since he was 8 years old and was sufficiently schooled in the topic to cofound an AI company called Geometric Intelligence. In 2016 he sold the firm to Uber, briefly serving as then-CEO Travis Kalanick’s AI czar, not a great credential for someone arguing for responsibility in the field. But Marcus didn’t stick around at Uber, and he later cofounded a robotics firm, Robust AI, which he left in 2021.
While pursuing his interests in AI, Marcus presented himself as a skeptic of what was becoming the dominant technology in the field—deep learning neural networks. He argued that collections of mathematical nodes with weird black-box behavior were overrated and that there was still a critical role for old-school AI, based on reasoning and logic. At one point he publicly debated the subject with his fellow NYU professor Yann LeCun, a deep learning pioneer who is Meta’s chief AI scientist and a recent Turing Award recipient. The debate was gentlemanly on both sides, but later, Marcus’ insistence that the accomplishments of deep learning were overrated led to the gloves coming off and a sniping war between the two, conducted in part via Twitter. LeCun’s responses to Marcus’ jibes at deep learning generally expressed the view that Marcus doesn’t know what he’s talking about. “I don’t engage in vacuous debates,” LeCun once tweeted, after Marcus asked him to respond to some charge about deep learning’s limitations. “I build stuff. You should try it sometimes.” (LeCun demurred when invited to comment on Marcus for this column.)
Back then, only months ago, Marcus’ quibbling was technical. But now that large language models have become a global phenomenon, his focus has shifted. The crux of Marcus’ new message is that the chatbots from OpenAI, Google, and others are dangerous entities whose powers will lead to a tsunami of misinformation, security bugs, and defamatory “hallucinations” that will automate slander. This seems to court a contradiction. For years Marcus had charged that the claims of AI’s builders are overhyped. Why is AI now so formidable that society must restrain it?
Marcus, always loquacious, has an answer: “Yes, I’ve said for years that [LLMs] are actually pretty dumb, and I still believe that. But there’s a difference between power and intelligence. And we are suddenly giving them a lot of power.” In February he realized that the situation was sufficiently alarming that he should spend the bulk of his energy addressing the problem. Eventually, he says, he’d like to head a nonprofit organization devoted to making the most, and avoiding the worst, of AI.
Marcus argues that in order to counter all the potential harms and destruction, policymakers, governments, and regulators have to hit the brakes on AI development. Along with Elon Musk and dozens of other scientists, policy nerds, and just plain freaked-out observers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits that he doesn’t really think such a pause would make a difference and that he signed mostly to align himself with the community of AI critics. Instead of a training time-out, he’d prefer a pause in deploying new models or iterating current ones. This would presumably have to be forced on companies, since there’s fierce, almost existential, competition between Microsoft and Google, with Apple, Meta, Amazon, and uncounted startups wanting to get into the game.
Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, “a global, neutral, nonprofit International Agency for AI,” which would be referred to with an acronym that sounds like a scream (Iaai!).
As he outlined in an op-ed he coauthored in The Economist, such a body might work like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably this agency would monitor algorithms to make sure they don’t include bias or promote misinformation or take over power grids while we aren’t looking. While it seems a stretch to imagine the United States, Europe, and China all working together on this, the threat of an alien, if homegrown, intelligence overthrowing our species might lead them to act in the interests of Team Human. Hey, it worked with that other global threat, climate change! Uh …
In any case, the discussion about controlling AI will gain even more steam as the technology weaves itself deeper and deeper into our lives. So expect to see a lot more of Marcus and a host of other talking heads. And that’s not a bad thing. Discussion about what to do with AI is healthy and necessary, even if the fast-moving technology may well develop regardless of any measures we painstakingly and belatedly adopt. The rapid ascension of ChatGPT into an all-purpose business tool, entertainment device, and confidant indicates that, scary or not, we want this stuff. Like every other huge technological advance, AI seems destined to bring us irresistible benefits, even as it changes the workplace, our cultural consumption, and, inevitably, us.
On the other hand, it’s hard to ignore warnings when they come from the field’s most celebrated innovators. This week, Geoffrey Hinton, known as the godfather of deep learning, left Google so he could speak more freely about the dangers of the AI he helped develop. Marcus hails the moment as a great development for the responsible AI movement. But don’t expect the two to pair up for a duet of Cassandra ballads. Hinton is not a Marcus fan. His University of Toronto homepage takes not one, not two, but three drive-by swipes at Marcus. Though Marcus is clearly hurt by this public disdain, he prefers to look on the bright side of Hinton’s brickbats. “It speaks to how concerned he is that people might take me seriously,” he says. Let no one say that Gary Marcus doesn’t think well on his feet.
Time Travel
In 2015 I met with Hinton in Mountain View, California, to discuss how deep learning was changing Google search, and so much more. The story I wrote, “Google Search Will Be Your Next Brain,” was a bit ahead of its time. And I admit, the concept of Bing being your next brain was inconceivable to me eight years ago.
“I need to know a bit about your background,” says Geoffrey Hinton. “Did you get a science degree?”
Hinton, a sinewy, dry-witted Englishman by way of Canada, is standing at a white board in Mountain View, California, on the campus of Google, the company he joined in 2013 as a Distinguished Researcher. Hinton is perhaps the world’s premier expert on neural network systems, an artificial intelligence technique that he helped pioneer in the mid-1980s. (He once remarked he’s been thinking about neural nets since he was 16.) For much of the period since then, neural nets—which roughly simulate the way the human brain does its learning—have been described as a promising means for computers to master difficult things like vision and natural language. After years of waiting for this revolution to arrive, people began to wonder whether the promises would ever be kept.
But about ten years ago, in Hinton’s lab at the University of Toronto, he and some other researchers made a breakthrough that suddenly made neural nets the hottest thing in AI. Not only Google but other companies such as Facebook, Microsoft, and IBM began frantically pursuing the relatively minuscule number of computer scientists versed in the black art of organizing several layers of artificial neurons so that the entire system could be trained, or even train itself, to divine coherence from random inputs, much in the way that a newborn learns to organize the data pouring into his or her virgin senses. With this newly effective process, dubbed deep learning, some of the long-standing logjams of computation (like being able to see, hear, and be unbeatable at Breakout) would finally be broken. The age of intelligent computer systems—long awaited and long feared—would suddenly be breathing down our necks. And Google search would work a whole lot better.
This breakthrough will be crucial in Google Search’s next big step: understanding the real world to make a huge leap in accurately giving users the answers to their questions as well as spontaneously surfacing information to satisfy their needs. To keep search vital, Google must get even smarter.
Ask Me One Thing
Eric asks: “Despite climate change problems in India, Apple is moving manufacturing there. Are corporate executives and boards factoring climate change into their decisions?”
Thanks, Eric. I’m not sure it makes a difference to the global environment whether iPhones are made in China or India. The whole world is under threat. Your question, though, is more about the well-publicized intentions of Apple and other corporate entities that claim to care about the environment.
The short answer is that many executives and board members do factor in climate change when they make decisions. Apple has committed to making its entire business 100 percent carbon neutral by 2030. That’s a serious commitment, and it’s good that Apple made it. Even better if it makes good on it. But the company is still going to build factories, mine rare earth elements, and do the other stuff necessary to sell billions of products. That’s just business. Other companies, of course, make and consume fossil fuel products with wanton disregard—and successfully lobby against environmental restrictions. Sometimes those same companies advertise how green they are.
Ultimately, it’s a mistake to rely on corporations to solve climate change. It’s up to the public to demand that our leaders make the huge economic and technical changes needed to mitigate the impending disaster that Earth will suffer—and for our leaders to listen. Every US senator who dares to say in public that global warming might cut heating costs in his home state is an indictment of his party, our political system, and, ultimately, us.