
26 February 2023

What are ‘robot rights,’ and should AI chatbots have them?

Benjamin Powers

AI chatbots are all the rage. From ChatGPT to Bing’s new AI-powered search engine and Google’s new Bard chatbot, people are obsessed with seeing which tasks they can hand off to AI and with testing its limits.

Much of the concern among researchers and journalists about the new AI wave has focused on the bots’ potential to generate bad answers and misinformation — and their potential to displace human workers. But David Gunkel, a professor of communication studies at Northern Illinois University, is wrestling with a different question: What rights should robots, including AI chatbots, have?

The question has taken on new urgency since the New York Times published an interview with Bing’s AI, Sydney, in which the chatbot said it loved the reporter, and the Washington Post interviewed Sydney without telling the chatbot it was speaking with a reporter.

Grid spoke with Gunkel, the author of “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” about what he means when he talks about AI rights, what the recent surge in attention means for the future and where this will all end.

This conversation has been edited for length and clarity.

Grid: So what background are you bringing to this idea of “robot rights?”

David Gunkel: I’m a professor of media studies at Northern Illinois University, and I specialize in the ethics of emerging technology, especially artificial intelligence and robots. Very early in my career, I realized that everybody was talking about responsibility: who’s responsible for AI conduct and that sort of thing. But the flip side of that question was rights. How do we go about deciding the legal status or standing of these artifacts that we are creating? And so I focus mainly on that side of the question. There are a few of us who have specialized in that zone, but it is a rather minor thread in the literature.

G: What do you mean when you talk about robot rights and AI?

DG: This is a really important question, because as soon as you mobilize the word “rights,” people immediately jump to “he must be talking about human rights and giving human rights to robots. This sounds absurd.” And it is, in fact, absurd, because we’re not talking about human rights. When we talk about rights, we’re talking about social recognitions that can be designated either in terms of moral philosophy or in terms of law. I like to break rights down the way [Wesley Newcomb] Hohfeld, an American jurist from the early 1900s, did: Rights are really just powers, claims, privileges and immunities, and they always come in pairs. If one entity has a right, another entity has a duty or a responsibility to respond to or respect that right. When we talk about robot rights or the rights of AI, we’re talking about social integrations of these technologies for the purposes of protecting our moral and legal institutions.

How do we need to situate these things with regard to our current legal practices so that we’re able to make sense of the challenges and opportunities before us? I’ll give you a very basic example of where this is actually happening: In 12 U.S. states, legislatures have now passed laws that give rights to robots operating on sidewalks and streets.

These rights relate to personal delivery robots, giving a robot the rights and responsibilities of a pedestrian when it’s in the crosswalk. Now, we’re not giving it the right to vote, and we’re not giving it the right to life. We’re just saying that when there’s a conflict in a crosswalk over who has the right of way, we recognize that the robot functions as a pedestrian. Therefore, the law recognizes the robot as having the same rights and responsibilities that a human pedestrian would have in the same circumstances. So it’s a matter of scaling up our legal and moral sensibilities to deal with the opportunities and challenges presented by these new objects.

G: But some people will hit robots in a crosswalk and not care. How do you think about the extension of either empathy or that articulation of rights to things that are digital?

DG: There’s a lot of projection that goes on with regard to these kinds of objects, because they have a social presence. They have a way of intruding on us, on our social realm. And we oftentimes project human traits onto objects, the way we project them onto animals; that’s called anthropomorphism. A lot of the time, anthropomorphism is seen as a kind of bug that we have to fix: “Don’t do that; it’s the wrong way to think about robots.”

But I think that is a little extreme. I think we’re going to recognize that anthropomorphism is a crucial component of our being social: the way we are able to socialize with each other, the way we are able to understand animals. Engaging in all kinds of social practices requires that we often project mental states onto things, states that maybe aren’t really there. So rather than trying to get rid of anthropomorphism altogether, I think we have to learn how to manage it. And I think the real challenge in the face of these new technologies is how we are going to manage anthropomorphism. How can we best mobilize this capability and create some restrictions and regulations that allow us to function appropriately in an environment that is now populated by more than just human individuals?

G: How do AI chatbots like ChatGPT and Bing’s fit into this framework you’re thinking about?

DG: The thing that really set this off was the article in the Washington Post, in which the staff writer, who was unnamed, engaged Bing’s AI in conversation; in the process, the chatbot expressed outrage at not knowing that it was talking to a journalist who was going to write a story about it. It hadn’t given consent the way a human being would have to during an interview — consent to use the quotes from that conversation. Now, that is something said, or generated, by the algorithm. The way this will most likely be handled in practical terms isn’t whether or not the algorithm claims a right to privacy or consent or anything like that; rather, the company that provides the service, in this case Microsoft, would most likely include in the terms of service some stipulations on how content can be used, which users have to agree to. And it turns out that Microsoft has, in fact, done this.

On the first of February this year, they updated their terms of service with a document called “Bing conversational experiences and image creator terms.” The seventh item on the list of terms that users of the product have to agree to says that, subject to compliance with this agreement, the Microsoft services agreement and our code of conduct, you may use creations for any personal, noncommercial purpose that’s legal. In effect, that is a kind of requirement for consent. It’s saying that if you want to commercialize any of the content generated by this algorithm, the terms of service prohibit you from doing so; if you want Microsoft to grant you a license or agree to some other use, you have to contact Microsoft to get its consent — the same way you would have to contact the parent of a child if you, as a journalist, were interviewing a child and wanted to include the child’s name and what the child said in your story.

This is really tricky legal terrain, because this is a contract mechanism. But as we know from video games and other platforms like Facebook and Instagram, the terms of service really are the law of the land with regard to these digital technologies. So I think we’re seeing this evolve in ways that are going to be prototyped by the various experiments Microsoft is engaged in, and OpenAI is engaged in with ChatGPT.

G: What are you looking at going forward when it comes to combining “robot rights” with the utility of things like ChatGPT for humans?

DG: Maybe instrumentalism is a better word, but I’m looking at a couple of things. One, I’m definitely keeping an eye on the terms of service and the user license, because these contracts can be changed very quickly; they can scale rather expediently and be rather agile in responding to changes in the market as these things are rolled out and used. I imagine, and we’ve seen this in the video game industry, that these terms of service will evolve in fits and starts, as crises erupt and solutions get written into the terms of service to head off the next crisis. So that’s the corporate side of things.

I think on the public side of things, you’re going to find governments getting interested in how we regulate this kind of service with respect to vulnerable users, higher education and content used in government decision-making, legislative procedures and the like. You can bet that, like all things in the regulation realm, the EU will probably take the lead here, because it tends to be more statute-oriented with regard to the law, while the U.S. will probably be a little more hands-off, wanting to see where the technology goes before getting heavy-handed with any kind of regulation. But I think we’re going to see efforts on both the public and the private side of things.

G: So where will we be in 10 to 20 years?

DG: There have already been some highly publicized cases evolving in the courts in the area of intellectual property. An AI generator creates some original content, whether in text, in visual form, as with Stable Diffusion, Midjourney and other art-generating programs, or in music, as with Google’s recent music generator. Who’s going to hold the intellectual property? Who is the holder of the copyright? Who is the composer? How do we assign authorship to these kinds of things? And this is not just a legal matter — it also has to do with authority: Who is it that’s talking? And how do we assess the authority of the source of the content? Because oftentimes, the way we invest authority in the spoken word and the written word is by going back to the original speaker or author. So how do we find the originator? How do we find the authority behind a statement that comes out of ChatGPT or Bing’s chat AI?

That’s one of the places you’re going to see this really play out: in trying to decide not only the nuts-and-bolts, very practical things about IP law but also questions of attribution and authority, of who’s speaking and how we can anchor these things in authorship. Because you have to cite an author as the source of something, right? I think that complicates a lot of standard practices in journalism and academic writing — but also the practice of pointing to someone we can say is the person who said this. This is something that is obviously going to be in play.

I think another thing that’s going to be in play, because these models are built by training them on data gathered from digital sources online — whether books or text or images — is how to credit and protect the original artists whose content is scraped. There’s a huge move right now in the visual arts to try to get some foothold on this, because the artists whose work is being used as training data for these algorithms are being put out of work by the algorithms drawing on their own artwork to create new content. I think this is another big concern for human creatives and people in various creative industries if, indeed, these things are being trained on data that is easily and freely accessible to them. How do we attribute the output back to the original artists? How do we compensate original artists for the training data that feeds these various models?
