Steven Levy
Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to the potential dangers posed by computer systems. His best-known projects involved analyzing the ways in which Facebook (now Meta) gained a shockingly deep understanding of its users from all the times they clicked “like” on the platform. Now he’s shifted to studying the surprising things that AI can do. He’s conducted experiments, for example, indicating that computers could predict a person’s sexuality by analyzing a digital photo of their face.
I’ve gotten to know Kosinski through my writing about Meta, and I reconnected with him to discuss his latest paper, published this week in the peer-reviewed Proceedings of the National Academy of Sciences. His conclusion is startling. Large language models like OpenAI’s, he claims, have crossed a border and are using techniques analogous to actual thought, once considered solely the realm of flesh-and-blood people (or at least mammals).

Specifically, he tested OpenAI’s GPT-3.5 and GPT-4 to see if they had mastered what is known as “theory of mind.” This is the ability, developed in humans during childhood, to understand the thought processes of other people, such as grasping that someone who didn’t see an object being moved will still look for it where they last saw it. It’s an important skill. If a computer system can’t correctly interpret what people think, its understanding of the world will be impoverished and it will get lots of things wrong. If models do have theory of mind, they are one step closer to matching and exceeding human capabilities. Kosinski put LLMs to the test and now says his experiments show that in GPT-4 in particular, a theory-of-mind-like ability “may have emerged as an unintended by-product of LLMs’ improving language skills … They signify the advent of more powerful and socially skilled AI.”