by Glyn Moody
It would be an understatement to say that artificial intelligence (AI) is much in the news these days. It's widely viewed as likely to usher in the next big step-change in computing, but one recent development in the field has particular implications for open source: the rise of "ethical" AI. In October 2016, the White House Office of Science and Technology Policy, the European Parliament's Committee on Legal Affairs and, in the UK, the House of Commons' Science and Technology Committee all released reports on how to prepare for the future of AI, with ethical issues forming an important component of each. At the beginning of last year, the Asilomar AI Principles were published, followed in November 2017 by the Montreal Declaration for a Responsible Development of Artificial Intelligence.
Abstract discussions of what ethical AI might or should mean became very real in March 2018, when it was revealed that Google had won a share of the contract for the Pentagon's Project Maven, which uses artificial intelligence to interpret the huge quantities of video imagery collected by aerial drones in order to improve the targeting of subsequent drone strikes. When this became known, it caused a firestorm at Google. Thousands of Google employees signed an internal petition addressed to the company's CEO, Sundar Pichai, asking him to cancel the project. Hundreds of researchers and academics sent an open letter supporting them, and some Google employees resigned in protest.
It later emerged that Google had hoped to win further defense work worth hundreds of millions of dollars. However, in the face of the massive protests, Google management announced that it would not be seeking any further Project Maven contracts after the present one expires in 2019. And in an attempt to answer criticisms that it was straying far from its original "don't be evil" motto, Pichai posted "AI at Google: our principles", although some were unimpressed.
Amazon and Microsoft also are grappling with similar issues about what constitutes ethical use of their AI technologies. But the situation with Google is different, because key to the Project Maven deal with the Pentagon is open-source software—Google's TensorFlow:
...an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs [tensor processing units]), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and the flexible numerical computation core is used across many other scientific domains.
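To make that description concrete, here is a minimal sketch of what working with the library looks like. This is illustrative code only, not anything from Google or Project Maven; it assumes the eager-execution API of TensorFlow 2.x, and the tensor values are arbitrary:

    # Minimal TensorFlow sketch: define two small tensors and multiply them.
    # TensorFlow dispatches the same operation to whatever hardware is
    # available (CPU, GPU or TPU), which is the "flexible architecture"
    # the description above refers to.
    import tensorflow as tf

    a = tf.constant([[1.0, 2.0],
                     [3.0, 4.0]])   # a 2x2 matrix
    b = tf.constant([[1.0],
                     [0.5]])        # a 2x1 matrix

    product = tf.matmul(a, b)       # matrix multiplication, run on-device

    print(product.numpy())          # -> [[2.] [5.]]

The point worth noting is that nothing in such code knows or cares what the numbers represent: the same numerical primitives serve a handwriting classifier and a drone-footage analyzer equally well, which is why the ethical questions arise at the level of deployment rather than of the library itself.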
It's long been accepted that the creators of open-source projects cannot stop their code from being used for purposes they disagree with or even strongly condemn: the freedom to run a program for any purpose is, after all, part of why it's called free software. But Google's use of open-source AI tools for work with the Pentagon does raise a new question. What exactly does the rise of "ethical" AI imply for the Open Source world, and how should the community respond?
Ethical AI represents a significant opportunity for open source. One important aspect of "ethical" is transparency. For example, the Asilomar AI Principles include the following:
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
More generally, people are recognizing that "black box" AI approaches are unacceptable. If such systems are to be deployed in domains where the consequences can be serious and dangerous—perhaps a matter of life or death, as in drone attacks—independent experts must have the ability to scrutinize the underlying software and its operation. The French and British governments already have committed to opening up their algorithms in this way. Open-source software provides a natural foundation for an ethical approach based on transparency.
The current interest in ethical AI means the Open Source community should push for the code underlying AI systems to be released under a free software license. Although that goes beyond simple transparency, the manifest success of the open-source methodology in every computing domain (with the possible exception of the desktop) lends weight to the argument that doing so is good not just for transparency, but for efficiency too.
However, as well as being a huge opportunity, AI also represents a real threat to free software: not directly, but because most of the big breakthroughs in the field are being made by companies with extensive resources. Those companies are naturally interested in making money from AI, so they see it ultimately as just part of the research and development work that will lead to new products. That contrasts with Linux, say, which is first and foremost a community project, albeit one that involves large-scale (and welcome) collaboration with industry. Currently missing are major open-source AI projects run independently of any company.
There have been some moves to bring the worlds of open source and AI together. For example, in March 2018, The Linux Foundation launched the LF Deep Learning Foundation:
...an umbrella organization that will support and sustain open source innovation in artificial intelligence, machine learning, and deep learning while striving to make these critical new technologies available to developers and data scientists everywhere.
Founding members of LF Deep Learning include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE. With LF Deep Learning, members are working to create a neutral space where makers and sustainers of tools and infrastructure can interact and harmonize their efforts and accelerate the broad adoption of deep learning technologies.
As part of that initiative, The Linux Foundation also announced Acumos AI:
...a platform and open source framework that makes it easy to build, share, and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.
Both of those are welcome steps, but the list of founding members emphasizes once more how the organization is dominated by companies—many of them from China, which is emerging as a leader in this space. That's no coincidence. As the "Deciphering China's AI Dream" report explains, the Chinese government has made it clear that it wants to be an AI "superpower" and is prepared to expend money and energy to that end. Things are made easier by the country's limited privacy protection laws. As a result, huge quantities of data, including personal data, are available for training AI systems—a real boon for local researchers.

Crucially, AI is seen as a tool for social control. Applications include pre-emptive censorship, predictive policing and the introduction of a "social credit system" that will constantly monitor and evaluate the activities of Chinese citizens, rank their level of trustworthiness and reward or punish them accordingly.
Given the Chinese authorities' published priorities, it is unlikely that the development of AI technologies by local companies will pay more than lip service to ethical issues. As the recent incidents involving Google, Amazon and Microsoft indicate, it's not clear that Western companies will do much better. That leaves a vitally important role for open source: to act as a beacon of responsible AI software development. That can be achieved only if leaders step forward to propose and initiate ambitious AI projects, and if the coding community embraces and helps realize those plans. If this doesn't happen, 30 years of work in freeing software and its users could be rendered moot by a new generation of inscrutable black boxes that run closed-source code, and that increasingly run the world.
Image attribution: Cryteria
Glyn Moody has been writing about the internet since 1994, and about free software since 1995. In 1997, he wrote the first mainstream feature about GNU/Linux and free software, which appeared in Wired. In 2001, his book Rebel Code: Linux And The Open Source Revolution was published. Since then, he has written widely about free software and digital rights. He has a blog, and he is active on social media: @glynmoody on Twitter or identi.ca, and +glynmoody on Google+.