Thoughts on integrating disciplines towards online filtering that balances free speech and censorship
Digital speech online reflects and exacerbates social and political practices that we see offline. Further, digital speech takes place on platforms that attempt to regulate harmful content, but with clear limitations:
1) platforms are businesses, not governmental institutions such as a constitutional court, which is designed to decide for citizens what is right or wrong; 2) algorithms are poor at considering the context and nuance of words.
As a result, our most-used social media platforms regularly draw negative attention through filtering failures: for example, by blocking minorities while failing to filter those who are hateful towards minorities.

The problem is thus: users and platforms depend on algorithms for detecting and regulating harmful social practices (censorship), but algorithms are unable to provide fair discussions that do not exclude any users on the basis of their social traits (free speech).

One solution may be to evaluate the outcomes of algorithmic filtering under varying social and legal variables until they yield consistent results across different social groups (such as genders, ethnicities, and cultures). This way of approaching algorithmic filtering goes beyond computer science or social science alone, and even has the potential to improve legal regulation, considering that current EU-wide regulations push platforms to delete “more harmful content faster” rather than “better”.
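One way to make this evaluation concrete is to audit a filter's decisions group by group. The sketch below is a minimal illustration, not any platform's actual method: it assumes a hypothetical audit sample of `(group, is_harmful, was_flagged)` records and computes the per-group false positive rate, i.e. the share of benign comments wrongly censored in each group. A gap between groups is exactly the kind of unequal filtering described above; parity across groups would be the target.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per-group false positive rate: share of benign comments wrongly flagged.

    records: iterable of (group, is_harmful, was_flagged) tuples --
    hypothetical audit data pairing each comment's social group, its
    ground-truth label, and the filter's decision.
    """
    benign = defaultdict(int)   # benign comments seen per group
    flagged = defaultdict(int)  # benign comments wrongly flagged per group
    for group, is_harmful, was_flagged in records:
        if not is_harmful:
            benign[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign}

# Hypothetical audit sample: the filter over-flags benign comments
# from (or about) a minority group.
audit = [
    ("majority", False, False), ("majority", False, False),
    ("majority", False, True),  ("majority", True,  True),
    ("minority", False, True),  ("minority", False, True),
    ("minority", False, False), ("minority", True,  True),
]

rates = false_positive_rate_by_group(audit)
# rates["minority"] exceeding rates["majority"] signals that the
# filter excludes one group's benign speech more than the other's.
```

The same disaggregation could be repeated while varying the legal threshold for "harmful" or the social context of the sample, which is what "evaluating under varying social and legal variables" would amount to in practice.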
To answer the mounting demands on filtering in digital society, the disciplines must converge their perspectives and define online filtering as a shared socio-technical problem. This model can be described as the joining of three concepts: “legal norms”, “social practices”, and “algorithmic methods”. Via these concepts, I explain four stages of interdisciplinary combination and their joint outcomes.
28.10.19 - 10:15