NIK POPLI / SAN FRANCISCO
Artificial intelligence is widely expected to transform our lives. Leaders from across the sector gathered for a TIME dinner conversation on Nov. 30, where they emphasized the need to keep humans at the center of decisions about incorporating the technology into workflows and called on governments and industry leaders to take a responsible approach to managing the risks it poses.
As part of the TIME100 Talks series in San Francisco, senior correspondent Alice Park spoke with panelists Cynthia Breazeal, a pioneer in social robotics and the Dean for Digital Learning at MIT; James Landay, a computer science professor and vice director of the Institute for Human-Centered AI at Stanford University; and Raquel Urtasun, CEO and founder of self-driving tech startup Waabi, which recently put a fleet of trucks into service on Uber Freight’s trucking network. The panelists discussed the ethical considerations of AI and the ways in which leaders can ensure its benefits reach every corner of the world.
During the discussion, the three panelists traced AI’s rapid rise and its implications, emphasizing the need for responsible deployment. Landay reflected on the pivotal moment nearly a decade ago when neural networks started to make significant strides. He said the AI boom occurred when these neural networks moved from theoretical promise to practical application, making their way into products like smartphones and revolutionizing speech recognition. “We started to see that this was going to affect every company everywhere in our lives,” Landay said of AI. “That this technology was going to have a profound impact, both on the positive but potentially on the negative.”
Looking ahead, Landay noted the important role humans play in ensuring safety. “We need to be able to look at this technology and help shape it for good and help lead the field on how to design this technology in a way that would work well for people,” he said.
Asked about the role of artificial intelligence in the workforce, the panelists acknowledged the technology’s capacity to revolutionize certain job functions and eradicate tedious tasks. But they agreed that while AI can handle certain tasks efficiently, it remains crucial to preserve and enhance uniquely human qualities such as creativity and social understanding. “Humans right now are just way better than any AI system on the social and emotional [level],” Breazeal said, emphasizing the importance of viewing AI as a complement to human capabilities rather than a replacement.
Research has found that the rise of AI has become a deep source of unease and stress for the general public, as many fear their jobs will be replaced by increasingly intelligent artificial systems. But the panelists disagreed with that notion.
“There are a lot of jobs that are really important to what makes us human,” Landay added. “We heard some of the things about creativity, about understanding people. We're social animals. Those are things that AI is not going to do well anytime in the near future.”
Landay continued: “The real thing we want to do with AI is figure out how we can use it as an augmentation of what makes us human.” In the healthcare sector, for example, he said that AI will not replace the job of a radiologist but rather that radiologists who use AI will replace those who do not use the technology in their practices. “It's really important for educating our workforce to understand how to use AI and that we as designers and technologists are building systems that will help people be better at the jobs that they do. And that's where AI is going to have a positive impact.”
Urtasun said that while her self-driving truck startup eliminates the need for most human truck drivers, it frees those workers to pursue more attractive jobs that don’t require being on the road for weeks. “Driving trucks is a very, very hard job to do. People do not want to go weeks without seeing their families. If you look at the average age of that truck driver 20 years ago, it was 35. 10 years ago it was 45. Today it’s 55,” she said. “I think that this technology has so much potential in terms of removing some of the things that humans really shouldn't be doing.”
The U.S. has yet to pass a broad national AI law, though President Joe Biden recently signed an executive order seeking to reduce the risks the technology poses to workers, consumers and national security. Asked about the push to regulate AI, Landay said that the “laissez-faire” approach regulators took during the rise of social media, in the hope that companies would do the right thing, likely won’t suffice for AI. “There are still going to be folks who do the wrong thing because they want to make more money,” he said. “And at that point, this is where government regulation [will] have to come into play.” Landay also urged developers to build societal analysis into the design process, given AI’s implications for entire communities: “AI technologies have an impact way beyond the direct user you're designing for… and so we need a new design process that also analyzes the communities beyond the direct users that are going to be impacted by AI systems.”
Urtasun recognized the difficulty for governments to keep pace with evolving technology and stressed the need for collaboration between academia, industry and government. While acknowledging the complexity of regulating emerging technology, she advocated for a responsible approach from industry leaders, incorporating safety considerations from the outset of design. Drawing lessons from industries like self-driving cars, where safety is non-negotiable, she suggested that similar stringent frameworks could be adopted to ensure the responsible development and deployment of AI technologies.
The panelists also discussed the responsibility academia bears in shaping AI for the greater good. They agreed that schools should seek to empower individuals to interact with AI responsibly while grappling with its societal and ethical implications. With AI tools becoming increasingly accessible and user-friendly, Breazeal explained that AI is “transforming what it means to be digitally literate,” much as social media did. “We need to empower everyone to be able to make things that matter to them with this technology,” she said.
TIME100 Talks: Bringing AI Everywhere was presented by Intel.