What’s The Harm In The Online Safety Bill?

Another post analysing aspects of the draft Online Safety Bill.

Throughout the development of the government’s Online Harms policy, a central concern of ORG and other human rights organisations has been how any legally mandated content moderation policy could practically be achieved. The algorithmic moderation deployed by most social media companies is notoriously literal, and human review of content is often performed by people who are unaware of the context in which messages are sent.

These flaws result in false positives (acceptable content being removed) and false negatives (unacceptable content remaining visible).

The draft Online Safety Bill considers two distinct types of content: illegal content, and content that is legal but which has the potential to cause harm. The social media companies will have to abide by OFCOM’s code of practice in relation to both.

The definitions of these two types of content are therefore crucial to the coherence of the new regulatory system. Ambiguous definitions will make it harder for social media platforms to moderate their content. If the new system causes more acceptable content to be taken down, while allowing illegal and/or harmful content to remain on the platforms, then the law will be a failure.

Read the rest of this post on the Open Rights Group blog.
