Social media platforms are increasingly relying on artificial intelligence (AI) and machine learning (ML)-based tools to moderate and curate organic content online and to target and deliver advertisements. Many of these tools are designed to maximize engagement, which means they can also amplify sensationalist and harmful content such as misinformation and disinformation. This memo explores how the AI and ML-based tools used for ad targeting and delivery, content moderation, and content ranking and recommendation are spreading and amplifying misinformation and disinformation online.
It also outlines existing legislative proposals in the United States and the European Union that aim to tackle these issues. It concludes with recommendations for how internet platforms and policymakers can better address the algorithmic amplification of misleading information online. These include encouraging platforms to: provide greater transparency around their policies, processes, and impact; direct more resources toward improving fact-checking, moderation efforts, and the development of effective AI and ML-based tools; give users access to more robust controls; and provide researchers with access to meaningful data and robust tools. Although platforms have made some progress in implementing such measures, we as a coalition believe that platforms can do more to meaningfully and effectively combat the spread of misinformation and disinformation online. However, recognizing the financial incentives underlying platforms' advertising-driven business models, and the influence those incentives have on how platforms approach misinformation and disinformation, we encourage lawmakers to pursue appropriate legislation and policies that promote greater transparency and accountability around online efforts to combat misleading information.