Jerome Pesenti leads the development of artificial intelligence at one of the world’s most influential—and controversial—companies. As VP of artificial intelligence at Facebook, he oversees hundreds of scientists and engineers whose work shapes the company’s direction and its impact on the wider world.
AI is fundamentally important to Facebook. Algorithms that learn to grab and hold our attention help make the platform and its sister products, Instagram and WhatsApp, stickier and more addictive. And, despite some notable AI flops, like the personal assistant M, Facebook continues to use AI to build new features and products, from Instagram filters to augmented reality apps.
Mark Zuckerberg has promised to deploy AI to help solve some of the company’s biggest problems, by policing hate speech, fake news, and cyberbullying (an effort that has seen limited success so far). More recently, Facebook has been forced to reckon with how to stop AI-powered deception in the form of deepfake videos that could convincingly spread misinformation as well as enable new forms of harassment.
Pesenti joined Facebook in January 2018, inheriting a research lab created by Yann LeCun, one of the biggest names in the field. Before that, he worked on IBM's Watson AI platform and at BenevolentAI, a company applying the technology to medicine.
Pesenti met with Will Knight, senior writer at WIRED, near the magazine's offices in New York. The conversation has been edited for length.
Will Knight: AI has been presented as a solution to fake news and online abuse, but that may oversell its power. What progress are you really making there?
Jerome Pesenti: Moderating automatically, or even with humans and computers working together, at the scale of Facebook is a super challenging problem. But we’ve made a lot of progress.
Early on, the field made progress on vision: understanding scenes and images. In the last few years we've been able to apply that to recognizing nudity and violence, and to understanding what's happening in images and videos.
Recently there’s been a lot of progress in the field of language, allowing us a much more refined understanding of interactions through the language that people use. We can understand if people are trying to bully, if it’s hate speech, or if it’s just a joke. By no measure is it a solved problem, but there's clear progress being made.
WK: What about deepfakes?
JP: We’re taking that very seriously. We actually went around and created new deepfake videos, so that people could test deepfake detection techniques. It’s a really important challenge that we are trying to be proactive about. It’s not really significant on the platform at the moment, but we know it can be very powerful. We’re trying to be ahead of the game, and we’ve engaged the industry and the community.
WK: Let’s talk about AI more generally. Some companies, for instance DeepMind and OpenAI, claim their objective is to develop “artificial general intelligence.” Is that what Facebook is doing?
JP: As a lab, our objective is to match human intelligence. We're still very, very far from that, but we think it’s a great objective. But I think many people in the lab, including Yann, believe that the concept of “AGI” is not really interesting and doesn't really mean much.
On the one hand, you have people who assume that AGI is human intelligence. But I think that's a bit disingenuous, because if you really think about human intelligence, it is not very general. Then other people project onto AGI the idea of the singularity: that if you had an AGI, you would have an intelligence that could make itself better and keep improving. But there's no real model for that. Humans can't make themselves more intelligent. I think people are kind of throwing it out there to pursue a certain agenda.
WK: Facebook’s AI lab was built by LeCun, one of the pioneers of deep learning who recently won the Turing Award for his work in the area. What do you make of critics of the field’s focus on deep learning, who say it won’t bring us real intelligence?
"We are very very far from human intelligence," says Jerome Pesenti, Facebook's vice president of artificial intelligence.COURTESY OF FACEBOOK
JP: Deep learning and current AI, if you are really honest, have a lot of limitations. We are very, very far from human intelligence, and some of the criticisms are valid: It can propagate human biases, it's not easy to explain, it doesn't have common sense, it's more on the level of pattern matching than robust semantic understanding. But we're making progress in addressing some of these, and the field is still progressing pretty fast. You can apply deep learning to mathematics or to understanding proteins; there are so many things you can do with it.
WK: Some AI experts also talk about a “reproducibility crisis,” or the difficulty of recreating groundbreaking research. Do you see that as a big problem?
JP: It’s something that Facebook AI is very passionate about. When people do things that are not reproducible, it creates a lot of challenges. If you cannot reproduce it, it’s a lot of lost investment.
We believe that reproducibility brings a lot of value to the field. It not only helps people validate results, it also enables more people to understand what's happening and to build upon that. The beauty of AI is that it is ultimately systems run by computers. So it is a prime candidate, as a subfield of science, to be reproducible. We believe the future of AI will be something where it’s reproducible almost by default. We try to open source most of the code we are producing in AI, so that other people can build on top of it.
WK: OpenAI recently noted that the computing power required for advanced AI is doubling roughly every 3.5 months. Are you worried about this?
"Deep learning and current AI, if you are really honest, has a lot of limitations."
JEROME PESENTI
JP: That's a really good question. When you scale deep learning, it tends to behave better and to solve a broader range of tasks. So there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost is going up tenfold. Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures; it's not possible, nobody can afford that.
It means that at some point we're going to hit the wall. In many ways we already have. Not every area has reached the limit of scaling, but in most places, we're getting to a point where we really need to think in terms of optimization, in terms of cost benefit, and we really need to look at how we get most out of the compute we have. This is the world we are going into.
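(A quick back-of-the-envelope check connects the two figures: a doubling every 3.5 months compounds to about 12/3.5 ≈ 3.4 doublings per year, that is, an annual growth factor of

\[
2^{12/3.5} \approx 10.8,
\]

which is consistent with the roughly tenfold annual cost increase Pesenti describes.)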
WK: What did you learn from commercializing AI at IBM with Watson? What have you tried to copy, and what have you tried to avoid, at Facebook?
JP: Watson was a really fun time, and I think IBM deserves credit for calling out that this is a commercial market with real applications. I think that was really remarkable. But there was a bit too much overhyping, and I don't think that served IBM very well.
When you look at a place like Facebook, the rate of adoption within the organization is remarkable. The number of developers using AI within Facebook is more than doubling every year right now. So we need to explain that it's useful, but not overhype it. It doesn't serve us to claim it can do things it cannot. And I don't need to overhype it to justify the existence of my team.
WK: Facebook has sometimes struggled to turn AI research into a commercial success, for example with M. How are you trying to connect research and engineering more effectively?
JP: When you start talking about technology transfer, it means you've already lost the battle. You cannot just pick some research and ask other people to put it into production; you can't just throw it over the fence. The best way to set it up is to get the people doing fundamental research working with the people who are closer to the product. It's really an organizational challenge: to ensure there's a set of projects that mature over time and bring the people along with them, rather than have boundaries with scientists on one side throwing their research over the fence.
WK: What kinds of new AI products should we expect from Facebook in the near term, then?
JP: The two core uses of AI at Facebook today are making the platform safer for users and making sure that what we show users is valuable to them. But some of the most exciting things we're doing are trying to create new experiences that are only possible with AI. Both augmented reality and virtual reality can only exist with AI. We saw recently that you can interact with VR using your hands, which requires a really subtle understanding of what's around the headset. It parses the whole scene using just a camera so that you can use your hands as controllers. I also believe there is huge potential in making people more creative. You're seeing that with some competing offerings, like TikTok: many people create videos and content by interacting naturally with the medium, without being a specialist, a video editor, or an artist.
WK: Could the technology behind deepfakes perhaps be put to such creative ends?
JP: Absolutely. We need to be aware of both sides. There's a lot of potential for making people more creative and empowering them. But as we’ve learned over the past few years, we need to use the technology responsibly, and we need to be aware of the unintended consequences before they happen.
WK: What do you think about the idea of AI export controls? Can the technology be restricted? Would that harm the field?
JP: My personal opinion is that this seems very impractical to implement. Beyond that, though, it could negatively impact progress in research, forcing work to be less reproducible rather than more. I believe openness and collaboration is important for driving advances in AI, and restricting the publication or open-sourcing of the results of fundamental research would risk slowing the progress of the field.
That said, whether or not such controls are put in place, as responsible researchers we should continue to consider the risks of potential misapplications and how we can help to mitigate those, while still ensuring that our work advancing AI is as open and reproducible as possible.