BILLY PERRIGO
Will artificial intelligence take our jobs? If you listen to Silicon Valley executives talking about the capabilities of today’s cutting-edge AI systems, you might think the answer is “yes, and soon.”
But a new paper published by MIT researchers suggests that automation in the workforce might happen more slowly than you think.
The researchers, based at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), studied not only whether AI was able to perform a task, but also whether it made economic sense for firms to replace the humans performing that task, given the wider context of the labor market.
They found that while computer vision AI is today capable of automating tasks that account for 1.6% of worker wages in the U.S. economy (excluding agriculture), only 23% of those wages (0.4% of the economy as a whole) are attached to tasks that, at today’s costs, would be cheaper for firms to automate than to keep paying human workers to do. “Overall, our findings suggest that AI job displacement will be substantial, but also gradual—and therefore there is room for [government] policy and retraining to mitigate unemployment impacts,” the authors write.
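To see how those headline figures fit together, here is a quick back-of-the-envelope check in Python. The 1.6% and 23% figures come from the paper; the arithmetic below is purely illustrative.

```python
# Share of U.S. worker wages (excluding agriculture) tied to tasks that
# computer vision could technically automate, per the MIT paper.
vision_exposed_wage_share = 0.016  # 1.6%

# Fraction of those exposed wages where automation would also be cheaper
# than human labor at today's costs, per the paper.
economically_viable_fraction = 0.23  # 23%

# Resulting share of all wages where automation makes economic sense today.
viable_share = vision_exposed_wage_share * economically_viable_fraction
print(f"{viable_share:.2%}")  # prints 0.37%, which rounds to the paper's 0.4%
```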
Tasks like analyzing images from diagnostic equipment in a hospital, or examining trays to ensure they contain the right items, are given in the paper as examples of the kind of “vision tasks” that today’s AI could feasibly achieve. But tasks like these are often so fragmented across different jobs, the authors argue, that automating any one of them saves too little labor to justify the cost of deploying an AI system.
“Even though there is some change that is coming, there is also some time to adapt to it,” Neil Thompson, the study’s lead author, tells TIME. “It’s not going to happen so rapidly that everything is thrown into chaos right away.”
Unless it does. The study focused only on computer vision AI—systems that can recognize and categorize objects in images and videos—rather than more flexible systems like multimodal large language models, of which OpenAI’s GPT-4 is an example. A recent study from OpenAI estimated that 19% of U.S. workers could see 50% of their workplace tasks “impacted” by GPT-4-level systems—a far higher estimate than the MIT study’s, which covers only computer vision. A crucial question for the economy in the age of AI will be whether the MIT study’s findings apply to more “general” AI tools—ones that promise to automate most forms of cognitive labor that can be done behind a computer screen.
The MIT researchers found it can be expensive for companies to “fine-tune” computer vision systems to suit a specific, specialized task. While such an investment may make economic sense for the largest companies, it is often cheaper for a small outfit to simply retain a trained worker who already performs the task well. This dynamic, according to the MIT paper, is a key reason that not every task AI is capable of doing today is also economically viable to hand over to a machine. (The paper, submitted to the journal Management Science, has not yet been peer-reviewed.)
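A stylized version of that cost comparison, in Python. Every number below is an invented placeholder for illustration, not a figure from the study:

```python
# Stylized break-even comparison between fine-tuning a vision system and
# keeping human workers on the task. All numbers are illustrative
# assumptions, not data from the MIT paper.

FINE_TUNE_COST = 165_000     # one-time cost to build and fine-tune the system ($)
ANNUAL_SYSTEM_COST = 30_000  # hosting, maintenance, and monitoring ($/year)
TASK_SHARE_OF_JOB = 0.10     # fraction of each worker's job the task represents
ANNUAL_WAGE = 45_000         # fully loaded wage per worker ($/year)
HORIZON_YEARS = 5            # period over which the firm compares costs

def automation_is_cheaper(num_workers: int) -> bool:
    """True if automating the task beats paying humans over the horizon."""
    human_cost = TASK_SHARE_OF_JOB * ANNUAL_WAGE * num_workers * HORIZON_YEARS
    ai_cost = FINE_TUNE_COST + ANNUAL_SYSTEM_COST * HORIZON_YEARS
    return ai_cost < human_cost

print(automation_is_cheaper(num_workers=3))   # small firm -> False
print(automation_is_cheaper(num_workers=50))  # large firm -> True
```

The fixed fine-tuning cost is spread over however many workers the system replaces, which is why the same task can clear the bar at a large firm but not at a small one.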
But it’s unclear whether this dynamic will carry over to language tasks. To “fine-tune” a computer vision model to, for example, distinguish specific types of medicine bottles from one another with 99.9% accuracy, you would need to collect large quantities of labeled images of different medicines, which can be a costly and cumbersome process (even if low-wage workers in impoverished countries were drafted in to do it on the cheap). You would then have to pay the significant computing costs of fine-tuning an AI model on that large store of data.
On the other hand, fine-tuning a cutting-edge language model to carry out a specific task can simply be a matter of giving it a detailed list of written rules. An OpenAI study from August last year found that GPT-4 was able to effectively carry out the task of content moderation on digital platforms after being fine-tuned using a detailed policy document and just a few labeled examples. Those findings suggest that large language models can be applied to a wide range of economic tasks far more quickly, and cheaply, than computer vision models.
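To illustrate how lightweight that kind of adaptation can be, here is a minimal sketch that steers a general-purpose model with a written policy via the OpenAI Python client. Prompting with rules stands in here for the customization the article describes; the policy text, model name, and example post are invented placeholders.

```python
# Minimal sketch: adapting a general-purpose language model to a task by
# handing it written rules. Assumes the openai Python client
# (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable. The policy and example below are placeholders, not material
# from the OpenAI study.
from openai import OpenAI

client = OpenAI()

POLICY = """You are a content moderator. Label each post ALLOW or BLOCK.
BLOCK posts containing threats of violence or targeted harassment.
ALLOW everything else, including criticism and strong opinions.
Reply with the label only."""

def moderate(post: str) -> str:
    """Classify a post according to the written policy above."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(moderate("I strongly disagree with this article."))  # expected: ALLOW
```

Updating the system to a new policy is a matter of editing the text, with no relabeled image datasets or retraining runs required.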
Fine-tuning for GPT-4 is still in a restricted beta, as OpenAI works to mitigate the significant safety challenges this level of customizability can present. But as OpenAI and its competitors begin allowing customers to fine-tune their most advanced models, the economy may see automation, or augmentation, progress faster than the MIT study predicts.
“It is certainly plausible that customizing large language models may be easier than customizing computer vision systems and that this could lead to more adoption in the economy,” Thompson tells TIME. But, “so long as a small engineering team is needed to integrate the system into the company’s workflow, costs are still restrictive.”